R Consortium Wiki
https://wiki.r-consortium.org/view/Main_Page
MediaWiki 1.23.15
= Code Coverage Tool for R =
''Last revised 2018-04-12T15:46:51Z by MarkHornick''
'''Working Group: Code Coverage Tool for R'''
Code coverage helps to ensure greater software quality by reporting how thoroughly test suites cover the various code paths. Having a tool that supports the breadth of the R language across multiple platforms, and that is used by R package developers and R core teams, helps to improve software quality for the R Community. While a few code coverage tools exist for R, this Oracle-proposed ISC project aims to provide an enhanced tool that addresses feature and platform limitations of existing tools via an ISC-established working group. It also aims to promote the use of code coverage more systematically within the R ecosystem.
The Code Coverage group released a revised covr package in summer 2017 ([https://www.r-consortium.org/blog/2017/06/28/code-coverage-tool-for-r-working-group-achieves-first-release]).
GitHub project: https://github.com/jimhester/covr/issues
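As a brief illustration of the kind of reporting covr provides (a sketch only, assuming the covr package is installed; the function names follow covr's documented API):

```r
# Sketch only: assumes the CRAN 'covr' package is installed.
library(covr)

# A small function with two code paths.
sign_label <- function(x) {
  if (x >= 0) "non-negative" else "negative"
}

# Measure coverage of sign_label() while running a test expression.
cov <- function_coverage(sign_label, sign_label(1))

# Only the if-branch was exercised, so coverage is below 100%.
percent_coverage(cov)
```

package_coverage() applies the same idea to a whole package's test suite.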
=== Working Group Members ===
* Shivank Agrawal, Oracle
* Chris Campbell, Mango Solutions
* Santosh Chaudhari, Oracle
* Karl Forner, Quartz Bio
* Jim Hester, RStudio
* Mark Hornick, Oracle – Group Leader
* Chen Liang, Oracle
* Willem Ligtenberg, Open Analytics
* Vlad Sharanhovich, Oracle
* Tobias Verbeke, Open Analytics
* Qin Wang, Oracle
* Hadley Wickham, RStudio – ISC Sponsor
=== Status as of August 10, 2017 ===
* Branch coverage with profile stats – Deferred
* Define canonical performance benchmark suite – Done
* #144 store test results as well as coverage – Deferred
* #134 ICC compatibility – Done
* Make code coverage use more pervasive in the R community – In progress
* Correct behavior for parallel code execution – Done
* Resolve package dependency license issues – Done
* #174 R6 methods are not covered – Done
* #117 covr with a local jenkins builder – Done
* #99 Shiny Source Tab Indicate the Filename – Done
* Observations testing ORE with covr – Done
=== Meeting History ===
* November 30, 2017
* November 8, 2017
* September 14, 2017
* August 24, 2017
* August 10, 2017
* July 27, 2017
* June 15, 2017
* March 9, 2017
* February 16, 2017
* January 26, 2017
* January 5, 2017
* December 15, 2016
* October 20, 2016
* October 4, 2016
* September 22, 2016
* September 8, 2016
* August 18, 2016
* July 28, 2016
* July 7, 2016
* June 16, 2016
* June 2, 2016
* May 24, 2016
* May 15, 2016
''Page Ddr2016, created 2017-05-01T17:59:37Z by MichaelLawrence''
= Distributed Computing Working Group Progress Report: 2016 =
== Introduction ==
Data sizes continue to increase, while single core performance has
stagnated. We scale computations by leveraging multiple cores and
machines. Large datasets are expensive to replicate, so we minimize
data movement by moving the computation to the data. Many systems,
such as Hadoop, Spark, and massively parallel processing (MPP)
databases, have emerged to support these strategies, and each exposes
its own unique interface, with little standardization.
Developing and executing an algorithm in the distributed context is a
complex task that requires specific knowledge of and dependency on the
system storing the data. It is also a task orthogonal to the primary
role of a data scientist or statistician: extracting knowledge from
data. The task thus falls to the data analysis environment, which
should mask the complexity behind a familiar interface, maintaining
user productivity. However, it is not always feasible to automatically
determine the optimal strategy for a given problem, so user input is
often beneficial. The environment should only abstract the details to
the extent deemed appropriate by the user.
R needs a standardized, layered and idiomatic abstraction for
computing on distributed data structures. R has many packages that
provide parallelism constructs as well as bridges to distributed
systems such as Hadoop. Unfortunately, each interface has its own
syntax, parallelism techniques, and supported platform(s). As a
consequence, contributors are forced to learn multiple idiosyncratic
interfaces, and to restrict each implementation to a particular
interface, thus limiting the applicability and adoption of their
software and hampering interoperability.
The idea of a unified interface stemmed from a cross-industry workshop
organized at HP Labs in early 2015. The workshop was attended by
different companies, universities, and R-core members. Immediately
after the workshop, Indrajit Roy, Edward Ma, and Michael Lawrence began
designing an abstraction that later became known as the CRAN package
ddR (Distributed Data in R)[1]. It declares a unified API for distributed
computing in R and ensures that R programs written using the API are
portable across different systems, such as Distributed R, Spark, etc.
The ddR package has completed its initial phase of development; the
first release is now on CRAN. Three ddR machine-learning algorithms
are also on CRAN: randomForest.ddR, glm.ddR, and kmeans.ddR. Two
reference backends for ddR have been completed, one for R’s parallel
package, and one for HP Distributed R. Example code and scripts to run
algorithms and code on both of these backends are available in our
public repository at https://github.com/vertica/ddR.
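As a sketch of the style of API ddR declares (a toy illustration assuming the CRAN ddR package; exact signatures should be checked against the package documentation):

```r
# Sketch only: assumes the CRAN 'ddR' package with its default 'parallel' backend.
library(ddR)

# dmapply() is ddR's distributed analogue of mapply(); it returns a
# distributed object split into partitions across the backend's workers.
dl <- dmapply(function(i) i^2, 1:4)

# collect() gathers the distributed pieces back into an ordinary R object.
collect(dl)
```

The same program is intended to run unchanged against other backends, such as Distributed R.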
The overarching goal of the ddR project was for it to be a starting
point in a collaborative effort, ultimately leading to a standard API
for working with distributed data in R. We decided that it was
natural for the R Consortium to sponsor the collaboration, as it
should involve both industry and R-core members. To this end, we
established the R Consortium Working Group on Distributed Computing,
with a planned duration of a single year and the following aims:
# Agree on the goal of the group, i.e., we should have a unifying framework for distributed computing. Define success metric.
# Brainstorm on what primitives should be included in the API. We can use ddR’s API of distributed data-structures and dmapply as the starting proposal. Understand relationship with existing packages such as parallel, foreach, etc.
# Explore how a ddR-like interface will interact with databases. Are there connections or redundancies with dplyr and multidplyr?
# Decide on a reference implementation for the API.
# Decide on whether we should also implement a few ecosystem packages, e.g., distributed algorithms written using the API.
We declared the following milestones:
# Mid-year milestone: Finalize API. Decide who will help develop the top-level implementation and backends.
# End-year milestone: Summary report and reference implementation. Socialize the final package.
This report outlines the progress we have made on the above goals and
milestones, and how we plan to continue progress in the second half of
the working group term.
== Results and Current Status ==
The working group has achieved the first goal by agreeing that we
should aim for a unifying distributed computing abstraction, and we
have treated ddR as an informal API proposal.
We have discussed many of the issues related to the second goal,
deciding which primitives should be part of the API. We aim for the
API to support three shapes of data (lists, arrays, and data frames)
and to enable the loading and basic manipulation of distributed
data, including multiple modes of functional iteration (e.g., apply()
operations). We aim to preserve consistency with base R data
structures and functions, so as to provide a simple path for users to
port computations to distributed systems.
The ddR constructs permit a user to express a wide variety of
applications, including machine-learning algorithms, that will run on
different backends. We have successfully implemented distributed
versions of algorithms such as K-means, Regression, Random Forest, and
PageRank using the ddR API. Some of these ddR algorithms are now
available on CRAN. In addition, the package provides several generic
definitions of common operators (such as colSums) that can be invoked
on distributed objects residing in the supporting backends.
Each custom ddR backend is encapsulated in its own driver package. In
the conventional style of functional OOP, the driver registers methods
for generics declared by the backend API, such that ddR can dispatch
the backend-specific instructions by only calling the generics.
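The dispatch pattern described above can be sketched with base R's S4 system (the class and generic names here are illustrative, not ddR's actual internals):

```r
# Illustrative sketch of backend dispatch via S4 generics; 'do_dmapply'
# and 'parallel_backend' are hypothetical names, not ddR's real API.
setClass("parallel_backend", representation(cores = "numeric"))

# The framework declares a generic...
setGeneric("do_dmapply", function(driver, FUN, ...) standardGeneric("do_dmapply"))

# ...and each driver package registers a method for its backend class.
setMethod("do_dmapply", "parallel_backend", function(driver, FUN, ...) {
  mapply(FUN, ..., SIMPLIFY = FALSE)  # stand-in for real parallel execution
})

backend <- new("parallel_backend", cores = 2)
res <- do_dmapply(backend, function(x) x + 1, 1:3)
```

The framework only ever calls the generic; swapping backends means loading a different driver package.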
The working group explored potential new backends with the aim of
broadening the applicability of the ddR interface. We hosted
presentations from external speakers on Spark and TensorFlow, and also
considered a generic SQL backend. The discussion focused on Spark
integration, and the R Consortium-funded intern Clark Fitzgerald took
on the task of developing a prototype Spark backend. The development
of the Spark backend encountered some obstacles, including the
immaturity of Spark and its R interfaces. Development is currently
paused, as we await additional funding.
During the monthly meetings, the working group deliberated on
different design improvements for ddR itself. We list two key topics
that were discussed. First, Michael Kane and Bryan Lewis argued for a
lower level API that directly operates on chunks of data. While ddR
supports chunk-wise data processing, via a combination of dmapply()
and parts(), its focus on distributed data structures means that
the chunk-based processing is exposed as the manipulation of these
data structures. Second, Clark Fitzgerald proposed restructuring the
ddR code into two layers that include chunk-wise processing while
retaining the emphasis on distributed data structures[2]. The lower
level API, which will interface with backends, will use a Map() like
primitive to evaluate functions on chunks of data, while the higher
level ddR API will expose distributed data structures, dmapply, and
other convenience functions. This refactoring would facilitate the
implementation of additional backends.
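The proposed two-layer split can be sketched in base R (a toy illustration of the idea, not the actual design in [2]):

```r
# Toy sketch of the two-layer design; all names are illustrative.
# Lower layer: a Map()-like primitive that evaluates a function on each chunk.
chunk_map <- function(FUN, chunks) lapply(chunks, FUN)

# Higher layer: a "distributed" vector is just a list of chunks here,
# and convenience functions hide the chunking from the user.
dvector <- function(x, nchunks = 2) {
  split(x, cut(seq_along(x), nchunks, labels = FALSE))
}
dsum <- function(dv) sum(unlist(chunk_map(sum, dv)))

dv <- dvector(1:10, nchunks = 2)
dsum(dv)  # same result as sum(1:10)
```

Only the lower layer would need to be reimplemented per backend, which is why the refactoring would ease adding new ones.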
== Discussion and Future Plans ==
The R Consortium-funded working group and internship has helped us
start a conversation on distributed computing APIs for R. The ddR
CRAN package is a concrete outcome of this working group, and serves
as a platform for exploring APIs and their integration with different
backends. While ddR is still maturing, we have arrived at a consensus
for how we should improve and finalize the ddR API.
As part of our goal for a reference implementation, we aim to develop
one or more prototype backends that will make the ddR interface useful
in practice. A good candidate backend is any open-source system that
is effective at R use cases and has strong community support. Spark
remains a viable candidate, and we also aim to further explore
TensorFlow.
We plan for a second intern to perform three tasks: (1) refactor the
ddR API to a more final form, (2) compare Spark and TensorFlow in
detail, with an eye towards the feasibility of implementing a useful
backend, and (3) implement a prototype backend based on Spark or
TensorFlow, depending on the recommendation of the working group.
By the conclusion of the working group, it will have produced:
* A stable version of the ddR package and at least one practical backend, released on CRAN,
* A list of requirements that are relevant and of interest to the community but have not yet been met by ddR, including alternative implementations that remain independent,
* A list of topics that the group believes worthy of further investigation.
[1] http://h30507.www3.hp.com/t5/Behind-the-scenes-Labs/Enhancing-R-for-Distributed-Computing/ba-p/6795535#.VjE1K7erQQj
[2] Clark Fitzgerald. https://github.com/vertica/ddR/wiki/Design
b2d84c4d967e694ba77d843d8d13965aaa43d746
= Distributed Computing Working Group =
''Last revised 2017-05-12T02:53:20Z by Indrajit roy''
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (Google)
* ''Joe Rickert'' (ISC liaison, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (Databricks)
== Reports ==
[[Distributed Computing Working Group Progress Report 2016|2016 Progress Report]]
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to DDR's internal design and object oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc.
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 05/11/2017 ===
* Henrik presented his work on “Futures in R: Atomic Building Blocks for Asynchronous Evaluation”
** A future is an abstraction for a value that will be available later
** Futures are in one of two states: resolved or unresolved
** Syntax for an explicit future:
 f <- future(expr)
 v <- value(f)
** There are many ways to resolve futures, e.g., sequential, multicore, multisession, etc.
** The aim is to make the future package “write once, run anywhere”. It works on all platforms.
** In the future API, the developer decides what to parallelize and the user decides how
** future.BatchJobs is a future API on top of BatchJobs (a map-reduce API for HPC schedulers)
** Henrik presented an example of using DNA-sequence files with the future API.
** Futures can be nested, e.g., one can create a future in an outer loop and create more futures within the inner loop.
** The plan() function decides whether futures are run in parallel or sequentially.
** The doFuture package is a foreach adaptor. It allows foreach to utilize HPC clusters.
Slides are available here: [[File:BengtssonH 20170511-future,RConsortium,flat.pdf|thumb|Futures in R]]
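A minimal runnable sketch of the explicit-future syntax from the talk (assuming the CRAN future package is installed; plan(sequential) keeps it runnable anywhere):

```r
# Sketch only: assumes the CRAN 'future' package.
library(future)
plan(sequential)  # resolve futures synchronously; swap for multisession, etc.

f <- future(sum(1:10))  # create a future for an expression
v <- value(f)           # block until resolved, then fetch the value
v
```

Changing the plan() is the only modification needed to move the same code from sequential to parallel execution.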
=== 03/09/2017 ===
* Talk by Bryan Lewis
** Has created a GitHub page with notes on Singularity.
** Singularity is a container technology for HPC applications
** No daemon; minimal virtualization to get the application running. Lightweight, with very low overheads.
** Used widely in supercomputers
** All distributed computing platforms, even with R skins, are difficult to use.
** Containers make it much easier to abstract away the long tail of software dependencies and focus on R
** Demonstrated an example of using Singularity with Tensorflow
** Tried MPI and dopar on the 1000Genome data
** The program parses the variant data and stores chunks as files. Then ran principal components on each file.
** Overload matrix operations to use foreach/MPI underneath.
** Overall: use existing R operators, overloading them with the appropriate backend.
** Will spend time working on Tensorflow, e.g., take a number of algorithms such as PCA and write them on top of Tensorflow using existing R primitives.
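The operator-overloading idea above can be sketched with base R's S4 system (illustrative only; the actual code discussed uses foreach/MPI underneath):

```r
# Toy sketch: overload %*% for a row-chunked matrix class; names are illustrative.
setClass("chunked_matrix", representation(chunks = "list"))

# Multiply each row chunk, then stack the results. A real backend would
# run the lapply() via foreach/MPI instead of locally.
setMethod("%*%", signature("chunked_matrix", "matrix"), function(x, y) {
  do.call(rbind, lapply(x@chunks, function(ch) ch %*% y))
})

m  <- matrix(1:6, nrow = 3)            # 3 x 2
cm <- new("chunked_matrix",
          chunks = list(m[1:2, , drop = FALSE], m[3, , drop = FALSE]))
v  <- matrix(c(1, 1), nrow = 2)        # 2 x 1
cm %*% v                               # same as m %*% v
```

Existing code written against ordinary matrix operators can then run unchanged on the chunked representation.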
=== 12/08/2016 ===
* Yuan Tang from Uptake was the presenter
** Michael and Indrajit will write a status report for the working group sometime in December or January
** Yuan gave an overview of TensorFlow
** JJ, Dirk and Yuan are working on R layer for TensorFlow
** TensorFlow is a platform for machine learning as well as other computations (even math proofs).
** It is GPU optimized and distributed.
** It is used in search, speech recognition, Google photos, etc.
** TensorFlow computations are directed graphs. Nodes are operations and edges are tensors.
** A lot of array, matrix, etc. operations are available
** Backend is mostly C++. Python front end exists.
** The TensorFlow R package is based on the Python front end
** In multi-device setting, TensorFlow figures out which devices to use and manages communication between devices.
** Computations are fault tolerant
** Yuan previously worked on Scikit Flow, which is now TF.Learn. It’s an easy transition for scikit-learn users.
** Yuan gave a brief overview of the python interface
** TensorFlow in R handles conversion between R and Python. The syntax is very similar to the Python API
** Future work: Adding more examples and tutorials, integration with Kubernetes/Marathon like framework.
** During the Q/A there were questions related to whether R kernels can be supported in TensorFlow, and whether R dataframes are a natural wrapper for TensorFlow objects.
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Falaki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype from AMPLab (2014). Initially it had the RDD API and was similar to PySpark API
** In 2015, with the merge into upstream Spark, the decision was made to integrate with the DataFrame API and hide the RDD API
** In 2016 more MLLib algorithms have been integrated and new APIs have been added. A CRAN package will be released soon
** The original SparkR architecture runs R on the master, which communicates with the JVM processes in the driver. The driver sends commands to the worker JVM processes, which execute them as Scala/Java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket based connection between SparkR and the RBackend. RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM processes. createDataFrame will convert your local R data into a JVM based distributed data frame.
** The API has IO, Caching, MLLib, and SQL related commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are 3 UDF functions: (1) lapply, which runs a function on each value of a list; (2) dapply, which runs a function on each partition of a data frame (you have to be careful about how data is partitioned); and (3) gapply, which groups on column names and then runs the function on each group.
** In the new CRAN package, install.spark() will automatically download and install Spark. Automated CRAN checks have been added for every commit. Should be available with Spark 2.1.0
* Q/A
** Currently trying to get zero-copy data frames between Python and Spark. Spark 2.0 has an off-heap manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integrating R.
** Spark dataframes gain from plan optimizations. It is not SparkR specific. R UDFs are still treated as black boxes by the optimizer
** Spark doesn't directly support matrices, and there is no immediate intent to do so. One can store an array or vector as a single column of a Spark data frame.
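A hedged sketch of the dapply() UDF described above (requires a running Spark installation and the SparkR package; not runnable standalone):

```r
# Sketch only: requires Spark >= 2.0 and the bundled SparkR package.
library(SparkR)
sparkR.session()

df <- createDataFrame(data.frame(x = 1:6))

# dapply() runs an R function on each partition; a schema describes the output.
schema <- structType(structField("x", "integer"),
                     structField("x2", "integer"))
res <- dapply(df, function(part) cbind(part, x2 = part$x * 2L), schema)
head(collect(res))

sparkR.session.stop()
```

Each partition arrives in the R function as an ordinary data.frame, which is why partitioning determines what the UDF actually sees.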
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and on improving the core ddR API, such as adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: the SparkR package overrides the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes. dplyr is RStudio's most popular tool and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the covers is called "sparkapi"; it is meant to be used by package builders. "spark_context()" and "invoke()" are the functions used to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending on the interest in using ddR with sparkapi, I can spend more time making sparkapi feature-rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by adding two distributed functions, such as "cross". For truncated SVD he only needed to overload two distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which then correspond to 10 chunks. These are, however, wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
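Indrajit's suggestion might be sketched with ddR's existing primitives roughly as follows (the file names are hypothetical, and the dmapply/parts/collect signatures are assumptions based on the 2016 CRAN release of ddR):

```r
# Sketch: one input file per partition ("chunk") via ddR.
# File names are hypothetical; exact ddR signatures may differ.
library(ddR)

files <- sprintf("chunk%02d.csv", 1:10)   # 10 input files -> 10 chunks

# dmapply applies the function to each element, producing a distributed
# object with one part per file:
dl <- dmapply(function(f) read.csv(f), files)

p  <- parts(dl)[[1]]    # reference to an individual chunk
x1 <- collect(dl, 1)    # materialize just the first chunk locally
```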
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open-source reference implementation.
** Not everyone needs to work hands-on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants.
* Logistics: meet monthly, focus groups may meet more often
* R Consortium may be able to figure out ways to fund smaller projects that come out of the working group.
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon, do you have a stack diagram?
* Simon: Can we get the R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
58ef9ccaef0675f275da91892e9e9f8c2e5c5aec
74
72
2017-05-01T18:42:48Z
MichaelLawrence
9
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liaison, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (Databricks)
== Reports ==
[[Distributed Computing Working Group Progress Report 2016|2016 Progress Report]]
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to DDR's internal design and object oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc.
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 03/09/2017 ===
* Talk by Bryan Lewis
** Has created a GitHub page with notes on Singularity.
** Singularity is a container technology for HPC applications
** No daemon. Minimum virtualization needed to get an application running. Lightweight, with very low overhead.
** Used widely in supercomputers
** All distributed computing platforms, even those with R skins, are difficult to use.
** Containers make it much easier to abstract away the long tail of software dependencies and focus on R
** Demonstrated an example of using Singularity with Tensorflow
** Tried MPI and foreach/%dopar% on the 1000 Genomes data
** The program parses the variant data and stores chunks as files. Then ran principal components on each file.
** Overload matrix operations to use foreach/MPI underneath.
** Overall approach: use existing R operators, overloading them with the appropriate backend.
** Will spend time working on TensorFlow, e.g., take a number of algorithms such as PCA and write them on top of TensorFlow using existing R primitives.
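The overloading approach described above can be sketched with foreach. This is a minimal illustration, not code from the talk: the `chunk_rows()` helper and `%d*%` operator are hypothetical names, and the real work would dispatch to MPI or another backend.

```r
library(foreach)
library(doParallel)
registerDoParallel(cores = 2)

# Hypothetical helper: split a matrix into row blocks ("chunks as files"
# in the talk; here just an in-memory list of blocks).
chunk_rows <- function(m, nchunks = 2) {
  idx <- split(seq_len(nrow(m)), cut(seq_len(nrow(m)), nchunks))
  lapply(idx, function(i) m[i, , drop = FALSE])
}

# A custom operator that multiplies each row block in parallel and
# reassembles the result, so callers keep matrix-like syntax.
`%d*%` <- function(blocks, y) {
  foreach(b = blocks, .combine = rbind) %dopar% (b %*% y)
}

m <- matrix(runif(40), nrow = 8)
y <- matrix(runif(10), nrow = 5)
chunk_rows(m) %d*% y   # same result as m %*% y, computed blockwise
```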
=== 12/08/2016 ===
* Yuan Tang from Uptake was the presenter
** Michael and Indrajit will write a status report for the working group sometime in December or January
** Yuan gave an overview of TensorFlow
** JJ, Dirk, and Yuan are working on the R layer for TensorFlow
** TensorFlow is a platform for machine learning as well as other computations (even math proofs).
** It is GPU optimized and distributed.
** It is used in search, speech recognition, Google photos, etc.
** TensorFlow computations are directed graphs: nodes are operations and edges are tensors.
** Many array, matrix, etc. operations are available
** The backend is mostly C++; a Python frontend exists.
** The TensorFlow R interface is based on the Python frontend
** In a multi-device setting, TensorFlow figures out which devices to use and manages communication between devices.
** Computations are fault tolerant
** Yuan previously worked on Scikit Flow, which is now TF.Learn. It's an easy transition for scikit-learn users.
** Yuan gave a brief overview of the Python interface
** TensorFlow in R handles conversion between R and Python. The syntax is very similar to the Python API.
** Future work: Adding more examples and tutorials, integration with Kubernetes/Marathon like framework.
** During the Q/A there were questions related to whether R kernels can be supported in TensorFlow, and whether R dataframes are a natural wrapper for TensorFlow objects.
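As a flavor of the interface discussed, here is a minimal sketch using RStudio's tensorflow package with the TensorFlow 1.x graph-mode API (assumes TensorFlow is installed; the specific ops are illustrative, not from the talk):

```r
library(tensorflow)

# Nodes are operations, edges are tensors: build a small graph...
a <- tf$constant(matrix(1:4, 2, 2), dtype = tf$float32)
b <- tf$constant(diag(2), dtype = tf$float32)
prod <- tf$matmul(a, b)

# ...then evaluate it in a session (TensorFlow 1.x style)
sess <- tf$Session()
sess$run(prod)
```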
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Falaki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype from AMPLab (2014). Initially it had the RDD API and was similar to the PySpark API
** In 2015, with the merge into upstream Spark, the decision was made to integrate with the DataFrame API and hide the RDD API
** In 2016, more MLlib algorithms were integrated and new APIs were added. A CRAN package will be released soon
** The original SparkR architecture runs R on the master, which communicates with the JVM processes in the driver. The driver sends commands to the worker JVM processes, which execute them as Scala/Java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket based connection between SparkR and the RBackend. RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM processes. createDataFrame will convert your local R data into a JVM based distributed data frame.
** The API has IO, Caching, MLLib, and SQL related commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are 3 UDF functions: (1) lapply, which runs a function on each value of a list; (2) dapply, which runs a function on each partition of a data frame (you have to be careful about how data is partitioned); and (3) gapply, which groups on one or more column names and then runs the function on each group.
** In the new CRAN package, install.spark() will automatically download and install Spark. Automated CRAN checks have been added for every commit to the code. Should be available with Spark 2.1.0
* Q/A
** Currently trying to get a zero-copy dataframe between Python and Spark. Spark 2.0 has an off-heap manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integrating R.
** Spark dataframes gain from plan optimizations. This is not SparkR specific. R UDFs are still treated as black boxes by the optimizer.
** Spark doesn't directly support matrices, and there is no immediate intent to do so. One can store an array or vector as a single column of a Spark dataframe.
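The three UDF entry points above can be sketched as follows (assumes a local Spark ≥ 2.0 installation; in SparkR 2.x the list version is exported as spark.lapply, and the grouping schema here is illustrative):

```r
library(SparkR)
sparkR.session()                 # assumes Spark is installed locally

df <- createDataFrame(mtcars)

# (1) apply a function to each element of a list
squares <- spark.lapply(1:4, function(x) x^2)

# (2) dapply: run a function on each partition; output reuses the input schema
fast <- dapply(df, function(p) p[p$mpg > 20, ], schema(df))

# (3) gapply: group by a column, run a function per group, declare the schema
means <- gapply(df, "cyl",
                function(key, g) data.frame(cyl = key[[1]], mpg = mean(g$mpg)),
                structType(structField("cyl", "double"),
                           structField("mpg", "double")))
head(collect(means))
```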
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
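The sparklyr-based integration discussed above exposes both a high-level dplyr layer and a low-level JVM interface that a ddR backend could build on. A minimal sketch, assuming a local Spark installation:

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# High-level: dplyr verbs are translated to Spark SQL
mtcars_tbl <- copy_to(sc, mtcars)
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg))

# Low-level: invoke() calls methods directly on JVM objects
invoke(spark_context(sc), "defaultParallelism")

spark_disconnect(sc)
```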
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
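A minimal sketch of the batchtools workflow Bernd described (local execution; the mapped function and parameters are illustrative):

```r
library(batchtools)

# Registry in a temporary directory; the job database lives here
reg <- makeRegistry(file.dir = NA, seed = 1)

# One job per parameter value
batchMap(function(n) mean(rnorm(n)), n = c(1e3, 1e4, 1e5), reg = reg)

submitJobs(reg = reg)         # run with the configured cluster function
waitForJobs(reg = reg)
reduceResultsList(reg = reg)  # collect the three results
```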
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student at UC Davis. He will work on ddR integration with Spark and on improving the core ddR API, such as adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: The SparkR package overrides the dplyr interface. This is an issue for RStudio: SparkR is not a CRAN package, which makes it difficult to add changes, and dplyr, the most popular RStudio tool, is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is to be used by package builders. "spark_context()" and "invoke()" are the functions used to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time making sparkapi feature rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by adding two distributed functions, such as "cross". For truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 Genomes dataset.
** Overall he liked ddR, since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which now correspond to 10 chunks. These are, however, wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
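Indrajit's point can be sketched with ddR (the filenames are hypothetical; one dmapply task reads each file, so each file becomes one partition of the distributed object):

```r
library(ddR)   # default backend uses R's 'parallel' package

files <- sprintf("chunk%02d.csv", 1:10)   # hypothetical chunk files

# Each task reads one file; the 10 results become the 10 partitions
# of a single distributed data frame
dobj <- dmapply(function(f) read.csv(f), files,
                output.type = "dframe", combine = "rbind", nparts = 10)

# Work on individual chunks: parts() gives per-partition references,
# and collect() with an index fetches just that partition locally
third <- parts(dobj)[[3]]
local_chunk <- collect(dobj, 3)
```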
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined and at least one open-source reference implementation
** Not everyone needs to work hands-on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to figure out ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier will present SparkR.
a32e7c959b0b686142aee46a5d9f176cacaffff5
72
70
2017-05-01T17:56:12Z
MichaelLawrence
9
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liason, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (databricks)
== Reports ==
[[ddr2016|2016 Progress Report]]
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to DDR's internal design and object oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 03/09/2017 ===
* Talk by Brian Lewis
** Has created a Github page with notes on Singularity.
** Singularity is a container technology for HPC applications
** No daemon. Minimum virtualization to get application running. Light weight and has very low overheads.
** Used widely in supercomputers
** All distributed computing platforms even with R skins are difficult to use.
** Containers make it much easier to abstract away the long tail of software dependencies and focus on R
** Demonstrated an example of using Singularity with Tensorflow
** Tried MPI and dopar on the 1000Genome data
** The program parses the variant data and stores chunks as files. Then ran principal components on each file.
** Overload matrix operations to use foreach/MPI underneath.
** Overall: Use existing R operators and overloading them with the appropriate backend.
** Will spend time working on Tensorflow, e.g., take a number of algorithms such as PCA and write them on top of Tensorflow using existing R primitives.
=== 12/08/2016 ===
* Yuan Tang from Uptake was the presenter
** Michael and Indrajit will write a status report for the working group sometime in December or January
** Yuan gave an overview of TensorFlow
** JJ, Dirk and Yuan are working on R layer for TensorFlow
** TensorFlow is a platform for machine learning as well as other computations (even math proofs).
** It is GPU optimized and distributed.
** It is used in search, speech recognition, Google photos, etc.
** TensorFlow computations are directed graphs. odes are operations and edges are tensors.
** A lot of array, matrix, etc. operations are available
** Backend is mostly C++. Python front end exists.
** TensorFlow R is based on the python fronted
** In multi-device setting, TensorFlow figures out which devices to use and manages communication between devices.
** Computations are fault tolerant
** Yuan has previously worked on Scikit Flow which is now TF.Learn. It’s a easy transition for Scikit learn users.
** Yuan gave a brief overview of the python interface
** TensorFlow in R handles conversion between R and Python. Syntax is very similar to python API
** Future work: Adding more examples and tutorials, integration with Kubernetes/Marathon like framework.
** During the Q/A there were questions related to whether R kernels can be supported in TensorFlow, and whether R dataframes are a natural wrapper for TensorFlow objects.
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Faliki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype from AMPLab (2014). Initially it had the RDD API and was similar to PySpark API
** In 2015, the merge with upstream Spark, the decision was made to integrate with the Dataframe API, and hide the RDD API
** In 2016 more MLLib algorithms have been integrated and new APIs have been added. A CRAN package will be released soon
** Original SparkR architecture runs R on the master that communicates with the JVM processes in the driver. the driver sends commands to the worker JVM processes, and executes them as scala/java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket based connection between SparkR and the RBackend. RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM processes. createDataFrame will convert your local R data into a JVM based distributed data frame.
** The API has IO, Caching, MLLib, and SQL related commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are 3 UDF functions (1) lapply, runs function on different value of a list (2) dapply, runs function on each partition of a data frame. You have to careful about how data is partitioned, and (3) gapply, performs a grouping on different column names and then runs the function on each group.
** The new CRAN package install.spark() will automatically download and install Spark. Automated CRAN checks have been added to every commit to the code. Should be available with Spark 2.1.0
* Q/A
** Currently trying to get zero copy dataframe between python and Spark. Spark 2.0 has an off heap manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integration R.
** Spark dataframes gain from plan optimizations. It is not SparkR specific. R UDFs are still treated as black boxes by the optimizer
** Spark doesn't directly support matrixes. There is no immediate intent to do so either. One can store an array or vector as a single column of a Spark dataframe.
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
41e820a895d372f0b168bc2a8a8479b3f18a5555
55
54
2016-12-09T05:26:16Z
Indrajit roy
14
/* Minutes */
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liaison, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (Databricks)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to ddR's internal design and object-oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 12/08/2016 ===
* Yuan Tang from RStudio was the presenter
** Michael and Indrajit will write a status report for the working group sometime in December or January
** Yuan gave an overview of TensorFlow
** JJ, Dirk, and Yuan are working on the R layer for TensorFlow
** TensorFlow is a platform for machine learning as well as other computations (even math proofs).
** It is GPU optimized and distributed.
** It is used in search, speech recognition, Google photos, etc.
** TensorFlow computations are directed graphs: nodes are operations and edges are tensors.
** A lot of array, matrix, etc. operations are available
** Backend is mostly C++. Python front end exists.
** The TensorFlow R interface is based on the Python front end
** In multi-device setting, TensorFlow figures out which devices to use and manages communication between devices.
** Computations are fault tolerant
** Yuan previously worked on Scikit Flow, which is now TF.Learn. It's an easy transition for scikit-learn users.
** Yuan gave a brief overview of the python interface
** TensorFlow in R handles conversion between R and Python. The syntax is very similar to the Python API
** Future work: adding more examples and tutorials, and integration with Kubernetes/Marathon-like frameworks.
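The graph model described above can be sketched with the R tensorflow package. This is a minimal illustration using the TF 1.x-era session API of the time; it requires a separate Python TensorFlow installation, so treat it as a sketch rather than a tested example:

```r
# Minimal sketch of the TensorFlow graph model: nodes are operations,
# edges are tensors (assumes the tensorflow R package and a working
# Python TensorFlow backend are installed).
library(tensorflow)

# Build the graph; no computation happens yet
a <- tf$constant(2)
b <- tf$constant(3)
total <- a + b        # an "add" operation node with two tensor inputs

# Evaluate the graph by running the output tensor in a session
sess <- tf$Session()
sess$run(total)
```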
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Falaki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype from AMPLab (2014). Initially it had the RDD API and was similar to the PySpark API
** In 2015, with the merge into upstream Spark, the decision was made to integrate with the DataFrame API and hide the RDD API
** In 2016 more MLlib algorithms have been integrated and new APIs have been added. A CRAN package will be released soon
** The original SparkR architecture runs R on the master, which communicates with the JVM process in the driver. The driver sends commands to the worker JVM processes, which execute them as Scala/Java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket based connection between SparkR and the RBackend. RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM processes. createDataFrame will convert your local R data into a JVM based distributed data frame.
** The API has IO, caching, MLlib, and SQL-related commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are 3 UDF functions: (1) lapply, which runs a function on each value of a list; (2) dapply, which runs a function on each partition of a data frame (you have to be careful about how the data is partitioned); and (3) gapply, which groups on one or more columns and then runs the function on each group.
** In the new CRAN package, install.spark() will automatically download and install Spark. Automated CRAN checks have been added to every commit. The package should be available with Spark 2.1.0
* Q/A
** Currently trying to get zero-copy data frames between Python and Spark. Spark 2.0 has an off-heap manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integrating R.
** Spark dataframes gain from plan optimizations. It is not SparkR specific. R UDFs are still treated as black boxes by the optimizer
** Spark doesn't directly support matrices. There is no immediate intent to do so either. One can store an array or vector as a single column of a Spark dataframe.
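The data-movement and UDF entry points discussed above can be sketched in a few lines of SparkR. This is a minimal illustration following the SparkR API of that era; it needs a local Spark installation, so it is a sketch rather than a tested example:

```r
# Sketch of SparkR data movement (createDataFrame/collect) and the
# dapply/gapply UDF entry points (assumes a local Spark installation).
library(SparkR)
sparkR.session(master = "local[2]")

# createDataFrame() converts local R data into a distributed data frame
df <- createDataFrame(faithful)

# dapply: run an R function on each partition; the result schema is explicit
schema <- structType(structField("eruptions", "double"),
                     structField("waiting_x2", "double"))
res <- dapply(df, function(p) data.frame(p$eruptions, p$waiting * 2), schema)

# gapply: group on a column, then run an R function on each group
res2 <- gapply(df, "waiting",
               function(key, p) data.frame(waiting = key[[1]], n = nrow(p)),
               structType(structField("waiting", "double"),
                          structField("n", "integer")))

# collect() moves results from the JVM back into the local R session
head(collect(res2))
```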
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
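The merged sparklyr package mentioned above exposes both the dplyr verbs aimed at data scientists and the low-level invoke() interface (formerly sparkapi) aimed at package builders. A minimal sketch, assuming a local Spark installation (untested here):

```r
# Sketch of sparklyr's two layers: dplyr verbs for analysis, and the
# low-level invoke() API for calling JVM methods (assumes local Spark).
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars)

# dplyr verbs are translated to Spark SQL and executed remotely
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()

# The low-level layer calls Scala/Java methods directly on JVM objects
spark_context(sc) %>% invoke("version")

spark_disconnect(sc)
```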
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark as well as improving the core ddR API, e.g., adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the Spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: the SparkR package overrides the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes. dplyr is RStudio's most popular tool, and it is broken when SparkR is loaded.
** Sparklyr provides a dplyr interface. It will also support ML-like interfaces, such as consuming an ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code on CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is meant to be used by package builders. "spark_context()" and "invoke()" provide the functionality to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time making sparkapi feature-rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill to disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by adding two distributed functions, such as "cross". For truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 Genomes dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which now correspond to 10 chunks. These are, however, wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
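The parts()/dmapply() chunk pattern Indrajit describes can be sketched as follows. The file names are hypothetical, and this assumes ddR's default parallel backend (untested here):

```r
# Sketch of reading files as chunks with dmapply() and accessing them
# with parts() (assumes the ddR package; file names are hypothetical).
library(ddR)

files <- sprintf("chunk_%02d.csv", 1:10)   # hypothetical input files

# Each element of the resulting distributed list is one file's contents,
# i.e., one chunk
dl <- dmapply(function(f) read.csv(f), files)

# parts() exposes the chunks as individual distributed objects, so
# per-chunk work remains possible; collect() brings data back locally
chunk1 <- collect(parts(dl)[[1]])
```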
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** The working group will run for a year. Get an API defined, and get at least one open-source reference implementation
** Not everyone needs to work hands-on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to find ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liaison, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (Databricks)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to ddR's internal design and object-oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc.
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Falaki and Shivaram from Databricks and UC Berkeley:
** SparkR began as a prototype at AMPLab (2014). Initially it exposed the RDD API and was similar to the PySpark API.
** In 2015, with the merge into upstream Spark, the decision was made to integrate with the DataFrame API and hide the RDD API.
** In 2016, more MLlib algorithms have been integrated and new APIs have been added. A CRAN package will be released soon.
** In the original SparkR architecture, R runs on the master and communicates with the JVM processes in the driver. The driver sends commands to the worker JVM processes, which execute them as Scala/Java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket-based connection between SparkR and the RBackend. The RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between the R and JVM processes. createDataFrame() converts local R data into a JVM-based distributed data frame.
** The API has IO, caching, MLlib, and SQL related commands.
** Since Spark 2.0, R processes can run inside the JVM worker processes, so there is no need to keep long-running R processes.
** There are three UDF functions: (1) lapply runs the function on each element of a list; (2) dapply runs the function on each partition of a data frame (you have to be careful about how the data is partitioned); and (3) gapply groups on the given column names and then runs the function on each group.
** The new CRAN package's install.spark() will automatically download and install Spark. Automated CRAN checks have been added for every commit. It should be available with Spark 2.1.0.
* Q/A
** Currently trying to get zero-copy data frames between Python and Spark. Spark 2.0 has an off-heap memory manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integrating R.
** Spark data frames benefit from query-plan optimizations; this is not SparkR specific. R UDFs are still treated as black boxes by the optimizer.
** Spark doesn't directly support matrices, and there is no immediate intent to do so. One can store an array or vector as a single column of a Spark data frame.
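The three UDF entry points mentioned in the talk can be sketched as follows. This is a minimal illustration, not code from the meeting: it assumes SparkR as shipped with Spark 2.0+ and a local session, and the exact signatures (the list-level function is exported as spark.lapply) should be checked against the SparkR documentation.

<pre>
library(SparkR)
sparkR.session(master = "local")   # start a local Spark session

# (1) spark.lapply: run a function over each element of a list
squares <- spark.lapply(1:4, function(x) x^2)

# (2) dapply: run a function over each partition of a SparkDataFrame;
#     the output schema must be declared up front
df     <- createDataFrame(mtcars)
schema <- structType(structField("mpg2", "double"))
out    <- dapply(df, function(part) data.frame(mpg2 = part$mpg * 2), schema)

# (3) gapply: group on a column, then apply a function to each group
byCyl <- gapply(df, "cyl",
                function(key, part) data.frame(cyl = key[[1]],
                                               mean_mpg = mean(part$mpg)),
                structType(structField("cyl", "double"),
                           structField("mean_mpg", "double")))

head(collect(byCyl))   # collect() moves results back to local R
</pre>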
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs, based on an in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student at UC Davis. He will work on ddR integration with Spark, as well as on improving the core ddR API, e.g., adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to play around with. We will continue to improve it. Hopefully the Spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We already have kmeans, glm, etc. on CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: the SparkR package masks the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes. dplyr is RStudio's most popular tool and is broken under SparkR.
** sparklyr provides a dplyr interface. It will also support ML-style interfaces, such as consuming an ML model.
** sparklyr does not currently support any distributed computing features. Instead, we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code on CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is meant to be used by package builders. spark_context() and invoke() are the functions used to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending on the interest in using ddR with sparkapi, I can spend more time making sparkapi feature rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We reuse the code from SparkR, so everything in SparkR should continue to work. We don't need to change SparkR; we just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill to disk.
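The low-level interface Javier describes can be sketched roughly as follows. This is an illustration, not code from the demo: it assumes a local Spark installation and the sparklyr package (into which sparkapi was later merged, so spark_context() and invoke() live there), and method names passed to invoke() are plain JVM SparkContext methods.

<pre>
library(sparklyr)
sc <- spark_connect(master = "local")

# spark_context() returns a reference to the JVM SparkContext;
# invoke() calls an arbitrary Scala/Java method on such a reference
ctx <- spark_context(sc)
invoke(ctx, "version")              # the running Spark version
invoke(ctx, "defaultParallelism")   # default number of partitions

spark_disconnect(sc)
</pre>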
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by implementing two distributed functions, such as "cross"; for truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 Genomes dataset.
** Overall he liked ddR, since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which then correspond to 10 chunks. These are wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
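Indrajit's parts/dmapply pattern might look like the following sketch, assuming the ddR package from CRAN with its default "parallel" backend. The file names are hypothetical, and the argument details (output.type, combine, nparts) should be checked against the ddR documentation.

<pre>
library(ddR)   # loads the default "parallel" backend

# Read 10 hypothetical CSV files into 10 partitions of one dframe;
# each function call produces one chunk (partition)
files <- sprintf("data_%02d.csv", 1:10)
d <- dmapply(function(f) read.csv(f), files,
             output.type = "dframe", combine = "rbind",
             nparts = c(10, 1))

# parts() exposes the individual chunks; here each chunk is
# processed separately by passing parts(d) back into dmapply
nrows <- dmapply(function(chunk) nrow(chunk), parts(d))
collect(nrows)   # per-chunk row counts, materialized locally
</pre>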
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Create a common abstraction/interface to make it easier to work with distributed data in R
** Unify the interface
** The working group will run for a year: get an API defined, and get at least one open-source reference implementation
** Not everyone needs to work hands-on; we will create smaller groups to focus on those aspects
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to find ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from the algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create their own custom interface.
* Javier: Should we try to get people with more algorithm expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
97f7ba62f54e3eddea96e80da2235f04076c3bf6
52
51
2016-11-10T22:33:28Z
Indrajit roy
14
/* Minutes */
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liason, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (databricks)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to DDR's internal design and object oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 11/10/2016 ===
* Slides were presented by Hossein Faliki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype was from AMPLab in 2014. Initially it had the RDD API and was similar to PySpark API
** In 2015 the merge with upstream Spark, the decision was made to integrate with the Dataframe API, and hide the RDD API
** In 2016 onwards more MLLib algos have been exposed and new APIs have been added, and a CRAN package will be released soon
** SparkR architecture runs R on the Spark driver that communicates with the JVM processes in the driver, which are sent to the worker JVM processes, and executed as scala/java on workers.
** You can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** In the driver there is a socket based connection between SparkR and the RBackend which is on the JVM. RBackend deserializes the R code and converts them into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM across sockets. createDataFrame will convert your local R data into a JVM based distributed data frame.
** The API has IO, Caching, MLLib, and SQL commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are 3 UDF functions (1) lapply, run function on different value of a list (2) dapply, run function on each partition of a data frame. You have to careful about how data is partitioned (3) gapply, performs a grouping on differnt column names and then runs the function on each group.
** New CRAN package. install.spark() to automatically download and install Spark. Automated CRAN checks with every commit to the code. Should be available with Spark 2.1.0
** With Spark 2.0, Spark has a its off head manager where they will use Arrow. Once this is tested it on the Python integration we can use it for R. Currently trying to get zero copy dataframe between python and Spark.
* Q/A
** Spark dataframes gain from plan optimizations. It is not SparkR specific. R UDFs are still
treated as black boxes by the optimizer
** Spark doesn't directly support matrixes. There is no immediate intent to do so either.
One can store an array or vector as a single column of a Spark dataframe.
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
a8e8a5c7f86407973bb93fa0ec2417ed7681db46
51
49
2016-11-10T21:14:05Z
MichaelLawrence
9
add Hossein Falaki
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liason, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (databricks)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to DDR's internal design and object oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like Databricks, TensorFlow, etc.?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
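Clark's proposed refactoring might look roughly like the following S4 sketch. This is illustrative only; these class and generic names are not actual ddR code:

```r
# A formal S4 class for the distributed-list abstraction; the other
# containers (darray, dframe) would be layered on top of it.
setClass("DistributedList",
         representation(partitions = "list",       # references to remote chunks
                        backend    = "character"))

# darray and dframe become thin wrappers adding shape information:
setClass("DArray", contains = "DistributedList",
         representation(dim = "integer"))

# dmapply-style functions become generics dispatched on the list class:
setGeneric("dMap", function(x, FUN, ...) standardGeneric("dMap"))
setMethod("dMap", "DistributedList", function(x, FUN, ...) {
  new("DistributedList",
      partitions = lapply(x@partitions, FUN, ...),  # serial stand-in
      backend    = x@backend)
})
```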
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr to avoid the overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs, based on an in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student at UC Davis. He will work on integrating ddR with Spark and on improving the core ddR API, e.g., adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to play around with. We will continue to improve it. Hopefully the Spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We already have kmeans, glm, etc. on CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: the SparkR package overrides the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes, and dplyr, RStudio's most popular tool, is broken under SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead, we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code on CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is intended for package builders. "spark_context()" and "invoke()" provide the functionality to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending on the interest in using ddR with sparkapi, I can spend more time making sparkapi feature-rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6.
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR, so everything in SparkR should continue to work. We don't need to change SparkR; we just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill to disk.
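The low-level interface Javier demonstrated can be exercised in a few lines, assuming a local Spark installation (spark_connect, spark_context, and invoke are the entry points mentioned above):

```r
library(sparklyr)

sc <- spark_connect(master = "local")

# spark_context() returns a reference to the JVM SparkContext;
# invoke() calls a Scala/JVM method on that object by name.
ctx <- spark_context(sc)
spark_version <- invoke(ctx, "version")
parallelism   <- invoke(ctx, "defaultParallelism")

spark_disconnect(sc)
```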
* Michael Kane: presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by adding two distributed functions, such as "cross". For truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 Genomes dataset.
** Overall, he liked ddR, since it made the algorithms easy to implement.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which then correspond to 10 chunks. These are, however, wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
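The parts/dmapply workflow Indrajit describes could be sketched as follows. The file names are hypothetical, and the exact ddR signatures may differ slightly:

```r
library(ddR)  # default "parallel" backend

# Read 10 files in parallel; each read becomes one chunk (partition).
files <- sprintf("data_%02d.csv", 1:10)
dl <- dmapply(function(f) read.csv(f), files)

# Work on an individual chunk without materializing the whole object:
third_chunk <- collect(parts(dl)[[3]])
```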
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** The working group will run for a year. Get an API defined, and get at least one open-source reference implementation.
** Not everyone needs to work hands-on. We will create smaller groups to focus on specific aspects.
** We tried to get a diverse group of participants.
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to find ways to fund smaller projects that come out of the working group.
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create their own custom interface.
* Javier: Should we try to get people with more algorithm expertise?
* Joe: Simon, do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier will present SparkR.
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
fa35c57132996d4d6d161d54fb2414d7e51a56a1
45
44
2016-10-18T16:27:16Z
MichaelLawrence
9
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (Microsoft)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
* Explore the interfacing of R and Spark in the context of the ddR package (Clark Fitzgerald)
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology such as Spark, SQL, etc.
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API, or interact directly with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
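For reference, the DataFrame-level interaction discussed above is exposed in sparklyr through dplyr verbs. A minimal sketch (assuming a local Spark installation and the sparklyr and dplyr packages; the table name is illustrative):

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance and copy a small data frame over
sc <- spark_connect(master = "local")
iris_tbl <- copy_to(sc, iris, "iris")

# dplyr verbs are translated to Spark SQL and executed on the cluster;
# collect() brings the result back into R
iris_tbl %>%
  group_by(Species) %>%
  summarise(mean_petal = mean(Petal_Length)) %>%
  collect()

spark_disconnect(sc)
```

Limiting ddR's Spark backend to this DataFrame layer would rule out distributed arrays and lists, which is why the consensus above leans toward the RDD level.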
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
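A hypothetical sketch of that refactoring (class names illustrative, not ddR's actual implementation): the three distributed types become formal S4 classes layered on a common distributed-list abstraction:

```r
# Illustrative only: formal S4 classes for the proposed ddR refactoring.
# The distributed list holds the partitions; array and frame types are
# views layered on top of it.
setClass("DObject", representation(nparts = "integer"))
setClass("DList",   contains = "DObject")
setClass("DArray",  contains = "DList", representation(dim = "integer"))
setClass("DFrame",  contains = "DList", representation(names = "character"))

# Functions such as dmapply would then be written once against DList
# and inherited by DArray and DFrame.
```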
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr to avoid the overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student at UC Davis. He will work on ddR integration with Spark and on improving the core ddR API, for example by adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the Spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We have kmeans, glm, etc. already on CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: The SparkR package overrides the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes. dplyr is RStudio's most popular tool, and it breaks when SparkR is loaded.
** sparklyr provides a dplyr interface. It will also support ML-like interfaces, such as consuming an ML model.
** sparklyr does not currently support any distributed computing features. Instead, we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code on CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is meant to be used by package builders. spark_context() and invoke() provide the functionality to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending on the interest in using ddR with sparkapi, I can spend more time making sparkapi feature-rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6.
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR, so everything in SparkR should continue to work. We don't need to change SparkR; we just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill to disk.
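The low-level calls Javier mentions (spark_context() and invoke()) can be sketched as follows; this assumes a local Spark installation and uses the API as it shipped in sparklyr after the sparkapi merge:

```r
library(sparklyr)

sc  <- spark_connect(master = "local")
ctx <- spark_context(sc)           # reference to the JVM-side SparkContext

# invoke() calls a method on a Scala/Java object by name
invoke(ctx, "version")             # the Spark version string
invoke(ctx, "defaultParallelism")  # an integer

spark_disconnect(sc)
```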
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by implementing two distributed functions, such as "cross". For truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 Genomes dataset.
** Overall, he liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which then correspond to 10 chunks. These are, however, wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
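Indrajit's parts()/dmapply() pattern might be sketched like this (file names are hypothetical; assumes the ddR package with its default backend):

```r
library(ddR)

# Read 10 files in parallel; the result is a dlist with 10 partitions
files <- sprintf("chunk%02d.csv", 1:10)
dl <- dmapply(function(f) read.csv(f), files)

# Each partition ("chunk") can still be addressed individually:
# collect just the first partition, or apply a function per partition
chunk1 <- collect(dl, 1)
counts <- dmapply(nrow, parts(dl))
```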
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** The working group will run for a year. Get an API defined, and get at least one open-source reference implementation.
** Not everyone needs to work hands-on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to find ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and what people are using?
** Michael Lawrence: Yes, we should catalogue the available tools as well as the common use cases.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algorithm expertise?
* Joe: Simon, do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier will present SparkR.
f74c61cd75d285a65ff8883c1e0dfc0cf12ecb4e
44
43
2016-10-18T11:52:48Z
MichaelLawrence
9
October minutes
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (Microsoft)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
== Minutes ==
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
e75a4e3018cfffdaaa207761237224c2acd4dfa0
43
42
2016-10-18T11:29:29Z
MichaelLawrence
9
September minutes
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (Microsoft)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
== Minutes ==
=== 9/8/2016 ===
''Detailed notes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consoritum may be able to figure ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: Common layer needed to get algorithms working. We started from algos and tried to find the minimal common api. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
c23b1adf726b3deb911daffdd3961b67022d7531
42
37
2016-10-18T11:11:44Z
MichaelLawrence
9
add July minutes
wikitext
text/x-wiki
== Goals and Purpose ==
The Distributed Computing Working Group will endorse the design of a common abstraction for distributed data structures in R. We aim to have at least one open-source implementation, as well as a SQL implementation, released within a year of forming the group.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (Microsoft)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
== Minutes ==
=== 7/14/2016 ===
* Introduced Clark who is the intern funded by R Consortium. Clark is a graduate student from UC Davis. He will work on ddR integration with Spark and improving the core ddR API as well such as adding a distributed apply() for matrices, split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better.
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the pacakage: The SparkR package overrides the dplyr interface. This is an issue for RStudio. SparkR is not a CRAN package which makes it difficult to add changes. dplyr is the most popular tool by RStudio and is broken on SparkR.
** Sparklyr provides a dplyr interface. It will also support ML like interfaces, such as consuming a ML model.
** Sparklyr does not currently support any distributed computing features. Instead we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code in CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package underneath the cover is called "sparkapi" it is to be used by pacakge builders. "spark_context()" and "invoke()" are the functionality to call scala methods. It does not you to currently run R user defined functions. I am currently working on enabling that feature. Depending upon the interest in using ddR with sparkapi, I can spend more time to make sparkapi feature rich.
** Indrajit: What versions of Spark are supported
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. Was able to implement irls on ddR by implementing two distributed functions such as "cross". In truncated SVD only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which correspond 10 chunks now. These are however wrapped as a darray or dframe. But you can continue to work on the individual chunks by using parts(i).
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** Working group will run for a year. Get an API defined, get at least one open source reference implementations
** not everyone needs to work hands on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* R Consortium may be able to figure out ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from the algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
Distributed Computing Working Group Progress Report 2016
0
20
76
75
2017-05-01T18:45:03Z
MichaelLawrence
9
wikitext
text/x-wiki
Authors: Michael Lawrence and Indrajit Roy
== Introduction ==
Data sizes continue to increase, while single core performance has
stagnated. We scale computations by leveraging multiple cores and
machines. Large datasets are expensive to replicate, so we minimize
data movement by moving the computation to the data. Many systems,
such as Hadoop, Spark, and massively parallel processing (MPP)
databases, have emerged to support these strategies, and each exposes
its own unique interface, with little standardization.
Developing and executing an algorithm in the distributed context is a
complex task that requires specific knowledge of and dependency on the
system storing the data. It is also a task orthogonal to the primary
role of a data scientist or statistician: extracting knowledge from
data. The task thus falls to the data analysis environment, which
should mask the complexity behind a familiar interface, maintaining
user productivity. However, it is not always feasible to automatically
determine the optimal strategy for a given problem, so user input is
often beneficial. The environment should only abstract the details to
the extent deemed appropriate by the user.
R needs a standardized, layered and idiomatic abstraction for
computing on distributed data structures. R has many packages that
provide parallelism constructs as well as bridges to distributed
systems such as Hadoop. Unfortunately, each interface has its own
syntax, parallelism techniques, and supported platform(s). As a
consequence, contributors are forced to learn multiple idiosyncratic
interfaces, and to restrict each implementation to a particular
interface, thus limiting the applicability and adoption of their
software and hampering interoperability.
The idea of a unified interface stemmed from a cross-industry workshop
organized at HP Labs in early 2015. The workshop was attended by
different companies, universities, and R-core members. Immediately
after the workshop, Indrajit Roy, Edward Ma, and Michael Lawrence began
designing an abstraction that later became known as the CRAN package
ddR (Distributed Data in R)[1]. It declares a unified API for distributed
computing in R and ensures that R programs written using the API are
portable across different systems, such as Distributed R, Spark, etc.
The ddR package has completed its initial phase of development; the
first release is now on CRAN. Three ddR machine-learning algorithms
are also on CRAN, randomForest.ddR, glm.ddR, and kmeans.ddR. Two
reference backends for ddR have been completed, one for R’s parallel
package, and one for HP Distributed R. Example code and scripts to run
algorithms and code on both of these backends are available in our
public repository at https://github.com/vertica/ddR.
The overarching goal of the ddR project was for it to be a starting
point in a collaborative effort, ultimately leading to a standard API
for working with distributed data in R. We decided that it was
natural for the R Consortium to sponsor the collaboration, as it
should involve both industry and R-core members. To this end, we
established the R Consortium Working Group on Distributed Computing,
with a planned duration of a single year and the following aims:
# Agree on the goal of the group, i.e., that we should have a unifying framework for distributed computing. Define a success metric.
# Brainstorm on which primitives should be included in the API. We can use ddR’s API of distributed data structures and dmapply as the starting proposal. Understand the relationship with existing packages such as parallel, foreach, etc.
# Explore how a ddR-like interface will interact with databases. Are there connections or redundancies with dplyr and multidplyr?
# Decide on a reference implementation for the API.
# Decide on whether we should also implement a few ecosystem packages, e.g., distributed algorithms written using the API.
We declared the following milestones:
# Mid-year milestone: Finalize the API. Decide who will help develop the top-level implementation and backends.
# End-year milestone: Summary report and reference implementation. Socialize the final package.
This report outlines the progress we have made on the above goals and
milestones, and how we plan to continue progress in the second half of
the working group term.
== Results and Current Status ==
The working group has achieved the first goal by agreeing that we
should aim for a unifying distributed computing abstraction, and we
have treated ddR as an informal API proposal.
We have discussed many of the issues related to the second goal,
deciding which primitives should be part of the API. We aim for the
API to support three shapes of data --- lists, arrays and data frames
--- and to enable the loading and basic manipulation of distributed
data, including multiple modes of functional iteration (e.g., apply()
operations). We aim to preserve consistency with base R data
structures and functions, so as to provide a simple path for users to
port computations to distributed systems.
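The intended consistency with base R can be seen by putting mapply() and ddR's dmapply() side by side. This is a hedged sketch assuming the CRAN ddR package with its default backend; collect() is used to bring the distributed result back locally.

```r
library(ddR)

# Base R: element-wise application over two vectors.
local_res <- mapply(function(x, y) x + y, 1:4, 5:8)

# ddR: the same call shape, but the computation can run on a distributed
# backend and the result is a distributed object.
dist_res <- dmapply(function(x, y) x + y, 1:4, 5:8)
collect(dist_res)   # materialize the distributed result locally
```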
The ddR constructs permit a user to express a wide variety of
applications, including machine-learning algorithms, that will run on
different backends. We have successfully implemented distributed
versions of algorithms such as K-means, Regression, Random Forest, and
PageRank using the ddR API. Some of these ddR algorithms are now
available on CRAN. In addition, the package provides several generic
definitions of common operators (such as colSums) that can be invoked
on distributed objects residing in the supporting backends.
Each custom ddR backend is encapsulated in its own driver package. In
the conventional style of functional OOP, the driver registers methods
for generics declared by the backend API, such that ddR can dispatch
the backend-specific instructions by only calling the generics.
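The driver/dispatch pattern described above can be sketched in plain S4. The class and generic names here are illustrative, not ddR's actual backend API; the "backend" is just a local mapply() so the example is self-contained.

```r
library(methods)

# A hypothetical driver class that a backend package would define.
setClass("myBackendDriver", representation(nworkers = "numeric"))

# A hypothetical generic that the front end would declare.
setGeneric("backend_dmapply", function(driver, FUN, ...) {
  standardGeneric("backend_dmapply")
})

# The driver package registers a method for the generic; the front end
# dispatches on the driver object without knowing any backend details.
setMethod("backend_dmapply", "myBackendDriver", function(driver, FUN, ...) {
  mapply(FUN, ..., SIMPLIFY = FALSE)   # toy "backend": plain local mapply
})

drv <- new("myBackendDriver", nworkers = 2)
backend_dmapply(drv, function(x) x^2, 1:3)   # dispatches to the method above
```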
The working group explored potential new backends with the aim of
broadening the applicability of the ddR interface. We hosted
presentations from external speakers on Spark and TensorFlow, and also
considered a generic SQL backend. The discussion focused on Spark
integration, and the R Consortium-funded intern Clark Fitzgerald took
on the task of developing a prototype Spark backend. The development
of the Spark backend encountered some obstacles, including the
immaturity of Spark and its R interfaces. Development is currently
paused, as we await additional funding.
During the monthly meetings, the working group deliberated on
different design improvements for ddR itself. We list two key topics
that were discussed. First, Michael Kane and Bryan Lewis argued for a
lower level API that directly operates on chunks of data. While ddR
supports chunk-wise data processing, via a combination of dmapply()
and parts(), its focus on distributed data structures means that
the chunk-based processing is exposed as the manipulation of these
data structures. Second, Clark Fitzgerald proposed restructuring the
ddR code into two layers that include chunk-wise processing while
retaining the emphasis on distributed data structures[2]. The lower
level API, which will interface with backends, will use a Map() like
primitive to evaluate functions on chunks of data, while the higher
level ddR API will expose distributed data structures, dmapply, and
other convenience functions. This refactoring would facilitate the
implementation of additional backends.
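A toy, purely local sketch of the proposed layering follows. All names here are hypothetical; the point is only that backends would implement the small Map()-like chunk primitive, while the dmapply-style convenience layer is built once on top of it.

```r
# Lower layer: a backend only needs to evaluate a function over chunks.
chunk_map <- function(fun, chunks) lapply(chunks, fun)   # trivial local backend

# Upper layer: a dmapply-like wrapper that splits the input into chunks,
# delegates to the lower layer, and reassembles the result.
toy_dmapply <- function(fun, xs, nchunks = 2) {
  chunks <- split(xs, cut(seq_along(xs), nchunks, labels = FALSE))
  res <- chunk_map(function(chunk) lapply(chunk, fun), chunks)
  unname(unlist(res, recursive = FALSE))
}

toy_dmapply(function(x) x * 10, 1:4)   # list of 10, 20, 30, 40
```

Swapping in a real distributed backend would then only require replacing chunk_map(), which is the motivation for the two-layer design.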
== Discussion and Future Plans ==
The R Consortium-funded working group and internship has helped us
start a conversation on distributed computing APIs for R. The ddR
CRAN package is a concrete outcome of this working group, and serves
as a platform for exploring APIs and their integration with different
backends. While ddR is still maturing, we have arrived at a consensus
for how we should improve and finalize the ddR API.
As part of our goal for a reference implementation, we aim to develop
one or more prototype backends that will make the ddR interface useful
in practice. A good candidate backend is any open-source system that
is effective at R use cases and has strong community support. Spark
remains a viable candidate, and we also aim to further explore
TensorFlow.
We plan for a second intern to perform three tasks: (1) refactor the
ddR API to a more final form, (2) compare Spark and TensorFlow in
detail, with an eye towards the feasibility of implementing a useful
backend, and (3) implement a prototype backend based on Spark or
TensorFlow, depending on the recommendation of the working group.
By the conclusion of the working group, it will have produced:
* A stable version of the ddR package and at least one practical backend, released on CRAN,
* A list of requirements that are relevant and of interest to the community but have not yet been met by ddR, including alternative implementations that remain independent,
* A list of topics that the group believes worthy of further investigation.
[1] http://h30507.www3.hp.com/t5/Behind-the-scenes-Labs/Enhancing-R-for-Distributed-Computing/ba-p/6795535#.VjE1K7erQQj
[2] Clark Fitzgerald. https://github.com/vertica/ddR/wiki/Design
Initial Survey of API Usage
0
11
22
19
2016-06-20T15:27:23Z
Lukasstadler
8
wikitext
text/x-wiki
Using a small Java tool that drives the gcc preprocessor and runs some specialized regex queries yields the following reports on actual usage of the macros, typedefs, variables and functions defined in various R header files.
The usage counts were determined by scanning all CRAN source files (the CRAN dump is from mid-March 2016).
They may include false positives because the scan was done on a textual basis only.
The packages "SOD" and "Boom" were ignored because they contain many false positives.
* [[Native API stats of R.h]]
* [[Native API stats of Rinternals.h without USE_R_INTERNALS]]
* [[Native API stats of Rinternals.h with USE_R_INTERNALS]]
* [[Native API stats of all header files]]
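The counting step can be illustrated in R (the actual survey used a Java tool driving the gcc preprocessor over all of CRAN; the source lines and the API symbol below are made up for the example, and the word-boundary regex shows why a purely textual scan can still misfire on false positives).

```r
# Toy stand-in for a preprocessed C source file.
src <- c("SEXP ans = Rf_allocVector(REALSXP, n);",
         'SEXP nm  = Rf_install("names");',
         "ans = Rf_allocVector(INTSXP, 2);")

# Count lines that textually mention the symbol, bounded by word boundaries.
sum(grepl("\\bRf_allocVector\\b", src))   # textual hit count: 2
```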
I am not a lawyer, but please be aware that this is derived from the include files and may be covered by the same licenses, e.g.:
<pre>
/*
* R : A Computer Language for Statistical Data Analysis
* Copyright (C) 1995, 1996 Robert Gentleman and Ross Ihaka
* Copyright (C) 1999-2015 The R Core Team.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, a copy is available at
* https://www.R-project.org/Licenses/
*/
/*
* R : A Computer Language for Statistical Data Analysis
* Copyright (C) 2006-2016 The R Core Team.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, a copy is available at
* https://www.R-project.org/Licenses/
*/
</pre>
9caf0bd42520bcc4673da1d79c50314da522f684
19
2016-06-20T15:21:22Z
Lukasstadler
8
Created page with "Using a small Java tool that drives the gcc preprocessor and runs some specialized regex queries yields the following reports on actual usage of the macros, typedefs, variable..."
wikitext
text/x-wiki
Using a small Java tool that drives the gcc preprocessor and runs some specialized regex queries yields the following reports on actual usage of the macros, typedefs, variables and functions defined in various R header files:
The usage counts were determine by scanning all CRAN source files, they may include false positives because this was done on a textual base only.
The packages "SOD" and "Boom" were ignored because they contain many false positives.
* [[Native API stats of R.h]]
* [[Native API stats of Rinternals.h without USE_R_INTERNALS]]
* [[Native API stats of Rinternals.h with USE_R_INTERNALS]]
* [[Native API stats of all header files]]
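The counting step of such a textual scan can be sketched as follows. This is a hypothetical simplification, not the actual Java tool; the function name and the demo package sources are illustrative. It also shows why a purely textual scan reports false positives: occurrences inside comments and strings are counted too.

```python
import re

def count_identifier_usage(sources, identifiers):
    """Count word-boundary occurrences of each identifier across package sources.

    sources: dict mapping package name -> concatenated source text
    identifiers: iterable of identifier names to look for
    Returns: dict mapping identifier -> (total count, sorted list of packages using it)
    """
    report = {}
    for ident in identifiers:
        # \b avoids matching inside longer names (e.g. CallocCharBuf when
        # counting Calloc), but matches in comments and string literals still
        # count -- hence the false positives of a textual-only scan.
        pattern = re.compile(r"\b" + re.escape(ident) + r"\b")
        total = 0
        pkgs = []
        for pkg, text in sources.items():
            n = len(pattern.findall(text))
            if n:
                total += n
                pkgs.append(pkg)
        report[ident] = (total, sorted(pkgs))
    return report

# Illustrative sources: pkgB's comment inflates the Calloc count.
demo = {
    "pkgA": "x = Calloc(10, double); Free(x);",
    "pkgB": "/* Calloc mentioned in a comment */ double *y = Calloc(n, double);",
}
report = count_identifier_usage(demo, ["Calloc", "Free"])
```

Running the preprocessor first (as the real tool does) would strip comments before counting, removing one class of false positives.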
I am not a lawyer, but please be aware that this is derived from the include files and may be covered by the same licenses, e.g.:
<pre>
/*
* R : A Computer Language for Statistical Data Analysis
* Copyright (C) 1995, 1996 Robert Gentleman and Ross Ihaka
* Copyright (C) 1999-2015 The R Core Team.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with this program; if not, a copy is available at
* https://www.R-project.org/Licenses/
*/
/*
* R : A Computer Language for Statistical Data Analysis
* Copyright (C) 2006-2016 The R Core Team.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, a copy is available at
* https://www.R-project.org/Licenses/
*/
</pre>
4fc117224aa7abd7080ab7cbdc18cbff50adbc45
Main Page
0
1
82
81
2017-11-28T17:22:03Z
Jmertic
19
/* Get Involved */
wikitext
text/x-wiki
== Welcome to the R Consortium Wiki ==
The R Consortium, Inc. is a group organized under an open-source governance and foundation model to provide support to the R community, the R Foundation, and groups and individuals using, maintaining, and distributing R software.
This wiki space is for working groups and ISC Project collaboration and documentation.
== Get Involved ==
* [https://www.r-consortium.org/ R Consortium Website]
* [https://www.r-consortium.org/projects/call-for-proposals Looking to Submit a Proposal]
* [https://lists.r-consortium.org Sign up and follow along in mailing lists]
* [https://twitter.com/RConsortium Follow us on Twitter]
* [[R Consortium and the R Community Code of Conduct]]
* [[Top Level Projects]]
== Working Groups ==
* [[R Native API|Native APIs for R]]
* [[Distributed Computing Working Group|Distributed Computing]]
* [[Code Coverage Tool for R|Code Coverage for R]]
* [[R Certification|R Certification]]
* [[R in Medicine|R in Medicine]]
* [[R in Pharma|R in Pharma]]
'''''This wiki supports Linux Foundation ID single sign-on and registration via the link at the top of this page. Other components of the R Consortium that do not support single sign-on will directly request your Linux Foundation ID username and password for login.'''''
Consult the [//meta.wikimedia.org/wiki/Help:Contents User's Guide] for information on using the wiki software.
68b9e9aa500728b37b8c617fd2d30c1c3a761b5b
Native API stats of R.h
0
7
15
2016-06-20T15:20:05Z
Lukasstadler
8
Created page with "== Input == <pre> #include "R.h" </pre> == Result == <pre> #define Calloc(n, t) (t *) R_chk_calloc( (size_t) (n), sizeof(t) ) // Calloc used 5657 times in 240 packages #defin..."
wikitext
text/x-wiki
== Input ==
<pre>
#include "R.h"
</pre>
== Result ==
<pre>
#define Calloc(n, t) (t *) R_chk_calloc( (size_t) (n), sizeof(t) ) // Calloc used 5657 times in 240 packages
#define CallocCharBuf(n) (char *) R_chk_calloc((size_t) ((n)+1), sizeof(char)) // CallocCharBuf used 3 times in cplexAPI, patchDVI
#define DOUBLE_DIGITS 53 // DOUBLE_DIGITS used 42 times in evd
#define DOUBLE_EPS 2.2204460492503131e-16 // DOUBLE_EPS used 180 times in 40 packages
#define DOUBLE_XMAX 1.7976931348623157e+308 // DOUBLE_XMAX used 63 times in 13 packages
#define DOUBLE_XMIN 2.2250738585072014e-308 // DOUBLE_XMIN used 13 times in unmarked, deSolve, ifultools, spatstat
#define ENABLE_NLS 1 // ENABLE_NLS used 80 times in 59 packages
#define ERROR <defined> // ERROR used 6406 times in 293 packages
#define F77_CALL(x) x_ // F77_CALL used 4269 times in 195 packages
#define F77_COM(x) x_ // F77_COM used 2 times in igraph
#define F77_COMDECL(x) x_ // F77_COMDECL used 2 times in igraph
#define F77_NAME(x) x_ // F77_NAME used 1913 times in 117 packages
#define F77_SUB(x) x_ // F77_SUB used 771 times in 89 packages
#define Free(p) (R_chk_free( (void *)(p) ), (p) = __null) // Free used 21329 times in 683 packages
#define HAVE_ALLOCA_H 1 // HAVE_ALLOCA_H used 15 times in treatSens, Matrix, TMB, pbdZMQ, ore, dbarts
#define HAVE_AQUA 1 // HAVE_AQUA used 13 times in 11 packages
#define HAVE_F77_UNDERSCORE 1 // HAVE_F77_UNDERSCORE used 2 times in igraph
#define IEEE_754 1 // IEEE_754 used 47 times in igraph, Rcpp, data.table, stringi
#define ISNA(x) R_IsNA(x) // ISNA used 649 times in 100 packages
#define ISNAN(x) R_isnancpp(x) // ISNAN used 1342 times in 146 packages
#define IndexWidth Rf_IndexWidth // IndexWidth unused
#define LOCAL_EVALUATOR // LOCAL_EVALUATOR used 11 times in rggobi, XML, ifultools, RGtk2
#define LibExport // LibExport used 2 times in hsmm
#define LibExtern extern // LibExtern used 4 times in rJava
#define LibImport // LibImport unused
#define MESSAGE <defined> // MESSAGE used 172 times in 33 packages
#define M_1_PI 0.318309886183790671537767526745028724 // M_1_PI used 42 times in SpatialExtremes, decon, mvabund, geoR, geoRglm, ExomeDepth, libamtrack, miRada, RandomFields, DescTools
#define M_2_PI 0.636619772367581343075535053490057448 // M_2_PI used 27 times in RandomFieldsUtils, dynaTree, ExomeDepth, RandomFields, svd, DescTools, spatstat
#define M_2_SQRTPI 1.12837916709551257389615890312154517 // M_2_SQRTPI used 6 times in excursions, PearsonDS, SpecsVerification, ExomeDepth
#define M_E 2.71828182845904523536028747135266250 // M_E used 40 times in Runuran, lamW, gmum.r, ExomeDepth, CEC, PoweR, TMB, Bmix, tgp, RcppShark
#define M_LN10 2.30258509299404568401799145468436421 // M_LN10 used 27 times in monomvn, rphast, secr, Runuran, rtfbs, PlayerRatings, ExomeDepth, spaMM, logistf, laGP
#define M_LN2 0.693147180559945309417232121458176568 // M_LN2 used 166 times in 30 packages
#define M_LOG10E 0.434294481903251827651128918916605082 // M_LOG10E used 2 times in ExomeDepth
#define M_LOG2E 1.44269504088896340735992468100189214 // M_LOG2E used 2 times in ExomeDepth
#define M_PI 3.14159265358979323846264338327950288 // M_PI used 1853 times in 207 packages
#define M_PI_2 1.57079632679489661923132169163975144 // M_PI_2 used 149 times in 28 packages
#define M_PI_4 0.785398163397448309615660845819875721 // M_PI_4 used 18 times in 12 packages
#define M_SQRT1_2 0.707106781186547524400844362104849039 // M_SQRT1_2 used 26 times in SpatialExtremes, gmwm, excursions, forecast, subrank, dplR, ExomeDepth, SpecsVerification
#define M_SQRT2 1.41421356237309504880168872420969808 // M_SQRT2 used 72 times in 23 packages
#define Memcpy(p,q,n) memcpy( p, q, (size_t)(n) * sizeof(*p) ) // Memcpy used 483 times in 32 packages
#define Memzero(p,n) memset(p, 0, (size_t)(n) * sizeof(*p)) // Memzero used 5 times in Matrix
#define NA_INTEGER R_NaInt // NA_INTEGER used 1520 times in 183 packages
#define NA_LOGICAL R_NaInt // NA_LOGICAL used 355 times in 73 packages
#define NA_REAL R_NaReal // NA_REAL used 1667 times in 226 packages
#define NORET __attribute__((noreturn)) // NORET unused
#define NULL_ENTRY // NULL_ENTRY used 170 times in 12 packages
#define PI 3.14159265358979323846264338327950288 // PI unused
#define PROBLEM <defined> // PROBLEM used 861 times in 78 packages
#define RECOVER <defined> // RECOVER used 170 times in 14 packages
#define R_ARITH_H_ // R_ARITH_H_ unused
#define R_COMPLEX_H // R_COMPLEX_H used 1 times in uniqueAtomMat
#define R_Calloc(n, t) (t *) R_chk_calloc( (size_t) (n), sizeof(t) ) // R_Calloc used 81 times in clpAPI, cplexAPI, poppr, rLindo, glpkAPI
#define R_ERROR_H_ // R_ERROR_H_ unused
#define R_EXT_BOOLEAN_H_ // R_EXT_BOOLEAN_H_ used 2 times in jpeg, Rcpp11
#define R_EXT_CONSTANTS_H_ // R_EXT_CONSTANTS_H_ unused
#define R_EXT_MEMORY_H_ // R_EXT_MEMORY_H_ unused
#define R_EXT_PRINT_H_ // R_EXT_PRINT_H_ used 6 times in spTDyn, spTimer
#define R_EXT_UTILS_H_ // R_EXT_UTILS_H_ unused
#define R_FINITE(x) R_finite(x) // R_FINITE used 1387 times in 145 packages
#define R_Free(p) (R_chk_free( (void *)(p) ), (p) = __null) // R_Free used 78 times in clpAPI, cplexAPI, poppr, glpkAPI
#define R_INLINE inline // R_INLINE used 330 times in 34 packages
#define R_PROBLEM_BUFSIZE 4096 // R_PROBLEM_BUFSIZE unused
#define R_RANDOM_H // R_RANDOM_H unused
#define R_RCONFIG_H // R_RCONFIG_H unused
#define R_RS_H // R_RS_H unused
#define R_R_H // R_R_H used 9 times in TMB, uniqueAtomMat, DatABEL, GenABEL, VariABEL
#define R_Realloc(p,n,t) (t *) R_chk_realloc( (void *)(p), (size_t)((n) * sizeof(t)) ) // R_Realloc used 3 times in poppr, seqminer, gpuR
#define Realloc(p,n,t) (t *) R_chk_realloc( (void *)(p), (size_t)((n) * sizeof(t)) ) // Realloc used 244 times in 57 packages
#define SINGLE_BASE 2 // SINGLE_BASE unused
#define SINGLE_EPS 1.19209290e-7F // SINGLE_EPS unused
#define SINGLE_XMAX 3.40282347e+38F // SINGLE_XMAX used 4 times in mapproj
#define SINGLE_XMIN 1.17549435e-38F // SINGLE_XMIN unused
#define SINT_MAX 2147483647 // SINT_MAX used 4 times in robust, AnalyzeFMRI
#define SINT_MIN (-2147483647 -1) // SINT_MIN used 2 times in robust
#define SIZEOF_SIZE_T 8 // SIZEOF_SIZE_T used 1 times in PythonInR
#define SUPPORT_MBCS 1 // SUPPORT_MBCS used 1 times in bibtex
#define SUPPORT_UTF8 1 // SUPPORT_UTF8 used 3 times in tau, rindex, stringi
#define StringFalse Rf_StringFalse // StringFalse used 3 times in iotools
#define StringTrue Rf_StringTrue // StringTrue used 3 times in iotools
#define USING_R // USING_R used 238 times in 29 packages
#define WARN <defined> // WARN used 122 times in 20 packages
#define WARNING <defined> // WARNING used 957 times in 190 packages
#define cPsort Rf_cPsort // cPsort unused
#define error Rf_error // error used 63771 times in 1109 packages
#define iPsort Rf_iPsort // iPsort used 3 times in matrixStats, robustbase
#define isBlankString Rf_isBlankString // isBlankString used 1 times in iotools
#define rPsort Rf_rPsort // rPsort used 63 times in 15 packages
#define revsort Rf_revsort // revsort used 60 times in 20 packages
#define setIVector Rf_setIVector // setIVector unused
#define setRVector Rf_setRVector // setRVector used 3 times in RcppClassic, RcppClassicExamples
#define warning Rf_warning // warning used 7679 times in 434 packages
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R.h
typedef double Sfloat; // Sfloat used 440 times in AnalyzeFMRI, wavethresh, IGM.MEA, spatial, LS2W, robust, MASS, PBSmapping
typedef int Sint; // Sint used 2750 times in 48 packages
extern "C" {
void R_FlushConsole(void); // R_FlushConsole used 651 times in 78 packages
void R_ProcessEvents(void); // R_ProcessEvents used 275 times in 39 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Arith.h
extern "C" {
extern double R_NaN; // R_NaN used 469 times in 68 packages
extern double R_PosInf; // R_PosInf used 562 times in 112 packages
extern double R_NegInf; // R_NegInf used 699 times in 105 packages
extern double R_NaReal; // R_NaReal used 140 times in 34 packages
// NA_REAL used 1667 times in 226 packages
extern int R_NaInt; // R_NaInt used 58 times in 20 packages
// NA_INTEGER used 1520 times in 183 packages
// NA_LOGICAL used 355 times in 73 packages
int R_IsNA(double); // R_IsNA used 161 times in 40 packages
int R_IsNaN(double); // R_IsNaN used 75 times in 28 packages
int R_finite(double); // R_finite used 232 times in 44 packages
int R_isnancpp(double); // R_isnancpp used 8 times in igraph, PwrGSD
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Boolean.h
extern "C" {
typedef enum { FALSE = 0, TRUE } Rboolean;
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Complex.h
extern "C" {
typedef struct {
double r;
double i;
} Rcomplex; // Rcomplex used 893 times in 47 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Error.h
extern "C" {
void __attribute__((noreturn)) Rf_error(const char *, ...);
void __attribute__((noreturn)) UNIMPLEMENTED(const char *);
void __attribute__((noreturn)) WrongArgCount(const char *);
void Rf_warning(const char *, ...); // Rf_warning used 316 times in 66 packages
// warning used 7679 times in 434 packages
void R_ShowMessage(const char *s); // R_ShowMessage used 104 times in Rserve, rJava, HiPLARM
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Memory.h
extern "C" {
void* vmaxget(void); // vmaxget used 279 times in 20 packages
void vmaxset(const void *); // vmaxset used 279 times in 20 packages
void R_gc(void); // R_gc used 6 times in TMB, excel.link, gmatrix, microbenchmark
int R_gc_running(); // R_gc_running unused
char* R_alloc(size_t, int); // R_alloc used 7787 times in 330 packages
long double *R_allocLD(size_t nelem);
char* S_alloc(long, int); // S_alloc used 540 times in 50 packages
char* S_realloc(char *, long, long, int); // S_realloc used 55 times in 11 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Print.h
extern "C" {
void Rprintf(const char *, ...); // Rprintf used 33813 times in 729 packages
void REprintf(const char *, ...); // REprintf used 2531 times in 135 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/RS.h
extern "C" {
extern void *R_chk_calloc(size_t, size_t); // R_chk_calloc used 6 times in rpart, XML, itree, ifultools, mgcv
extern void *R_chk_realloc(void *, size_t); // R_chk_realloc used 5 times in seqminer, gpuR, ifultools, mgcv
extern void R_chk_free(void *); // R_chk_free used 2 times in mgcv
void call_R(char*, long, void**, char**, long*, char**, long, char**); // call_R used 2 times in PoweR
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Random.h
extern "C" {
typedef enum {
WICHMANN_HILL,
MARSAGLIA_MULTICARRY,
SUPER_DUPER,
MERSENNE_TWISTER,
KNUTH_TAOCP,
USER_UNIF,
KNUTH_TAOCP2,
LECUYER_CMRG
} RNGtype; // RNGtype unused
typedef enum {
BUGGY_KINDERMAN_RAMAGE,
AHRENS_DIETER,
BOX_MULLER,
USER_NORM,
INVERSION,
KINDERMAN_RAMAGE
} N01type; // N01type unused
void GetRNGstate(void); // GetRNGstate used 1753 times in 434 packages
void PutRNGstate(void); // PutRNGstate used 1794 times in 427 packages
double unif_rand(void); // unif_rand used 2135 times in 327 packages
double norm_rand(void); // norm_rand used 408 times in 93 packages
double exp_rand(void); // exp_rand used 110 times in 25 packages
typedef unsigned int Int32;
double * user_unif_rand(void); // user_unif_rand used 10 times in randaes, rstream, rngwell19937, SuppDists, randtoolbox, rlecuyer, Rrdrand
void user_unif_init(Int32); // user_unif_init used 5 times in randaes, SuppDists, randtoolbox, rngwell19937
int * user_unif_nseed(void); // user_unif_nseed used 4 times in randaes, SuppDists, rngwell19937
int * user_unif_seedloc(void); // user_unif_seedloc used 4 times in randaes, SuppDists, rngwell19937
double * user_norm_rand(void); // user_norm_rand used 1 times in RcppZiggurat
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Utils.h
extern "C" {
void R_isort(int*, int); // R_isort used 45 times in 18 packages
void R_rsort(double*, int); // R_rsort used 210 times in 29 packages
void R_csort(Rcomplex*, int); // R_csort unused
void rsort_with_index(double *, int *, int); // rsort_with_index used 77 times in 40 packages
void Rf_revsort(double*, int*, int); // Rf_revsort unused
// revsort used 60 times in 20 packages
void Rf_iPsort(int*, int, int); // Rf_iPsort unused
// iPsort used 3 times in matrixStats, robustbase
void Rf_rPsort(double*, int, int); // Rf_rPsort unused
// rPsort used 63 times in 15 packages
void Rf_cPsort(Rcomplex*, int, int); // Rf_cPsort unused
// cPsort unused
void R_qsort (double *v, size_t i, size_t j); // R_qsort used 10 times in extWeibQuant, pomp, robustbase, dplR, tclust, pcaPP
void R_qsort_I (double *v, int *II, int i, int j); // R_qsort_I used 33 times in 15 packages
void R_qsort_int (int *iv, size_t i, size_t j); // R_qsort_int unused
void R_qsort_int_I(int *iv, int *II, int i, int j); // R_qsort_int_I used 19 times in ff, matrixStats, arules, Rborist, slam, eco, bnlearn
const char *R_ExpandFileName(const char *); // R_ExpandFileName used 42 times in 20 packages
void Rf_setIVector(int*, int, int); // Rf_setIVector unused
// setIVector unused
void Rf_setRVector(double*, int, double); // Rf_setRVector unused
// setRVector used 3 times in RcppClassic, RcppClassicExamples
Rboolean Rf_StringFalse(const char *); // Rf_StringFalse unused
// StringFalse used 3 times in iotools
Rboolean Rf_StringTrue(const char *); // Rf_StringTrue unused
// StringTrue used 3 times in iotools
Rboolean Rf_isBlankString(const char *); // Rf_isBlankString unused
// isBlankString used 1 times in iotools
double R_atof(const char *str); // R_atof used 9 times in SSN, tree, foreign, iotools
double R_strtod(const char *c, char **end); // R_strtod used 4 times in ape, iotools
char *R_tmpnam(const char *prefix, const char *tempdir); // R_tmpnam used 2 times in geometry
char *R_tmpnam2(const char *prefix, const char *tempdir, const char *fileext); // R_tmpnam2 unused
void R_CheckUserInterrupt(void); // R_CheckUserInterrupt used 1487 times in 234 packages
void R_CheckStack(void); // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
void R_CheckStack2(size_t); // R_CheckStack2 unused
int findInterval(double *xt, int n, double x, // findInterval used 11 times in BSquare, DNAprofiles, unfoldr, chebpol, pomp, eco, protViz, PBSmapping, spatstat
Rboolean rightmost_closed, Rboolean all_inside, int ilo,
int *mflag);
void find_interv_vec(double *xt, int *n, double *x, int *nx, // find_interv_vec unused
int *rightmost_closed, int *all_inside, int *indx);
void R_max_col(double *matrix, int *nr, int *nc, int *maxes, int *ties_meth); // R_max_col used 2 times in geostatsp, MNP
}
</pre>
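Several of the heavily used Utils.h entry points are index-carrying sorts: R_qsort_I(v, II, i, j), for example, sorts v[i..j] (1-based, inclusive bounds) and applies the same permutation to the companion array II, which is how callers recover the ordering. A minimal self-contained illustration of those semantics, using the C library's qsort rather than R's actual implementation:

<pre>
#include <stdio.h>
#include <stdlib.h>

/* Pair each value with its companion index so qsort moves them together. */
typedef struct { double v; int idx; } pair_t;

static int cmp_pair(const void *a, const void *b) {
    double x = ((const pair_t *)a)->v, y = ((const pair_t *)b)->v;
    return (x > y) - (x < y);
}

/* Same contract as R_qsort_I (sketch only, not R's algorithm):
   sort v[i..j] (1-based, inclusive) and permute II identically. */
static void qsort_with_index(double *v, int *II, int i, int j) {
    int n = j - i + 1;
    pair_t *p = malloc(n * sizeof(pair_t));
    for (int k = 0; k < n; k++) { p[k].v = v[i-1+k]; p[k].idx = II[i-1+k]; }
    qsort(p, n, sizeof(pair_t), cmp_pair);
    for (int k = 0; k < n; k++) { v[i-1+k] = p[k].v; II[i-1+k] = p[k].idx; }
    free(p);
}

int main(void) {
    double v[]  = {3.0, 1.0, 2.0};
    int    II[] = {1, 2, 3};          /* 1-based positions, as R callers pass */
    qsort_with_index(v, II, 1, 3);
    printf("%g %g %g | %d %d %d\n", v[0], v[1], v[2], II[0], II[1], II[2]);
    return 0;
}
</pre>

After the call, II records where each sorted value originally sat, mirroring what packages such as matrixStats and arules use R_qsort_I / R_qsort_int_I for.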
== Stats ==
<pre>
0 1 2 3 4 5 6 7 8 9 10+
Macro: 15 3 6 3 2 1 2 0 0 1 36 (usage count)
(69) 15 11 4 2 5 2 1 1 1 0 27 (distinct package count)
Function: 8 2 4 4 3 2 2 0 1 1 32 (usage count)
(59) 8 7 5 3 4 1 1 2 0 1 27 (distinct package count)
Variable: 0 0 0 0 0 0 0 0 0 0 5 (usage count)
(5) 0 0 0 0 0 0 0 0 0 0 5 (distinct package count)
TypeDef: 2 0 0 0 0 0 0 0 0 0 3 (usage count)
(5) 2 0 0 0 0 0 0 0 1 0 2 (distinct package count)
Alias: 3 1 2 4 1 0 0 0 0 0 11 (usage count)
(22) 3 6 2 0 0 0 0 0 0 0 11 (distinct package count)
</pre>
(Reading the table: for example, 5 functions are referenced by exactly 2 distinct packages, and 3 typedefs are used 10 or more times in total.)
'''Native API stats of Rinternals.h with USE_RINTERNALS'''

''Page created 2016-06-20 by Lukasstadler.''
== Input ==
<pre>
#define USE_RINTERNALS
#include "Rinternals.h"
</pre>
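Defining USE_RINTERNALS before including Rinternals.h turns accessors such as INTEGER, REAL and DATAPTR from function calls into macros doing direct pointer arithmetic: a vector's payload is stored immediately after an aligned node header, so INTEGER(x) expands to ((int *)(((SEXPREC_ALIGN *)(x)) + 1)), as the Result listing below shows. A minimal self-contained mock of that header-then-payload layout (the struct here is an illustration, not R's actual SEXPREC):

<pre>
#include <stdio.h>
#include <stdlib.h>

/* Mock stand-in for R's vector node header -- illustration only.
   Real nodes carry sxpinfo, attrib, and vecsxp.length/truelength,
   padded via the SEXPREC_ALIGN union for alignment. */
typedef struct {
    int  type;     /* e.g. 13 for INTSXP */
    long length;
} mock_header;

/* Mirrors the USE_RINTERNALS expansion of INTEGER(x):
   the data starts right after the (aligned) header. */
#define MOCK_INTEGER(x) ((int *) (((mock_header *) (x)) + 1))

int main(void) {
    long n = 4;
    /* One allocation: header followed by n ints, as in an R vector node. */
    mock_header *x = malloc(sizeof(mock_header) + n * sizeof(int));
    x->type = 13;
    x->length = n;
    int *data = MOCK_INTEGER(x);
    for (long i = 0; i < n; i++) data[i] = (int)(i * i);
    printf("%d %d %d %d\n", data[0], data[1], data[2], data[3]);
    free(x);
    return 0;
}
</pre>

This direct-access expansion is why packages compiled with USE_RINTERNALS are sensitive to R's internal layout, which the per-macro usage counts below help quantify.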
== Result ==
<pre>
#define ANYSXP 18 // ANYSXP used 14 times in RPostgreSQL, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore, RGtk2
#define ATTRIB(x) ((x)->attrib) // ATTRIB used 83 times in 20 packages
#define BCODESXP 21 // BCODESXP used 15 times in rcppbugs, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore
#define BCODE_CODE(x) ((x)->u.listsxp.carval) // BCODE_CODE unused
#define BCODE_CONSTS(x) ((x)->u.listsxp.cdrval) // BCODE_CONSTS unused
#define BCODE_EXPR(x) ((x)->u.listsxp.tagval) // BCODE_EXPR unused
#define BODY(x) ((x)->u.closxp.body) // BODY used 48 times in 15 packages
#define BODY_EXPR(e) R_ClosureExpr(e) // BODY_EXPR unused
#define BUILTINSXP 8 // BUILTINSXP used 24 times in 11 packages
#define CAAR(e) ((((e)->u.listsxp.carval))->u.listsxp.carval) // CAAR unused
#define CAD4R(e) ((((((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CAD4R used 14 times in earth, foreign, actuar
#define CADDDR(e) ((((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CADDDR used 21 times in RPostgreSQL, foreign, actuar, bibtex
#define CADDR(e) ((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CADDR used 52 times in 11 packages
#define CADR(e) ((((e)->u.listsxp.cdrval))->u.listsxp.carval) // CADR used 104 times in 17 packages
#define CAR(e) ((e)->u.listsxp.carval) // CAR used 575 times in 63 packages
#define CDAR(e) ((((e)->u.listsxp.carval))->u.listsxp.cdrval) // CDAR unused
#define CDDDR(e) ((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval) // CDDDR unused
#define CDDR(e) ((((e)->u.listsxp.cdrval))->u.listsxp.cdrval) // CDDR used 52 times in Rlabkey, Rcpp11, dplyr, proxy, Rcpp, slam, tikzDevice, OpenCL, svd
#define CDR(e) ((e)->u.listsxp.cdrval) // CDR used 4523 times in 76 packages
#define CHAR(x) ((const char *) (((SEXPREC_ALIGN *) (x)) + 1)) // CHAR used 4405 times in 362 packages
#define CHARSXP 9 // CHARSXP used 106 times in 33 packages
#define CLOENV(x) ((x)->u.closxp.env) // CLOENV used 23 times in Rcpp11, covr, pomp, Rcpp, pryr, testthat, qtbase
#define CLOSXP 3 // CLOSXP used 83 times in 30 packages
#define COMPLEX(x) ((Rcomplex *) (((SEXPREC_ALIGN *) (x)) + 1)) // COMPLEX used 1697 times in 71 packages
#define CONS(a, b) Rf_cons((a), (b)) // CONS used 458 times in 30 packages
#define CPLXSXP 15 // CPLXSXP used 409 times in 49 packages
#define CreateTag Rf_CreateTag // CreateTag used 1 times in rgp
#define DATAPTR(x) (((SEXPREC_ALIGN *) (x)) + 1) // DATAPTR used 113 times in 11 packages
#define DDVAL(x) ((x)->sxpinfo.gp & 1) // DDVAL unused
#define DDVAL_MASK 1 // DDVAL_MASK unused
#define DECREMENT_REFCNT(x) do {} while(0) // DECREMENT_REFCNT unused
#define DISABLE_REFCNT(x) do {} while(0) // DISABLE_REFCNT unused
#define DOTSXP 17 // DOTSXP used 16 times in RPostgreSQL, PythonInR, Rcpp11, seqminer, Rcpp, pryr, rtkpp, spikeSlabGAM, rtkore
#define DropDims Rf_DropDims // DropDims unused
#define ENABLE_NLS 1 // ENABLE_NLS used 80 times in 59 packages
#define ENABLE_REFCNT(x) do {} while(0) // ENABLE_REFCNT unused
#define ENCLOS(x) ((x)->u.envsxp.enclos) // ENCLOS used 7 times in Rcpp, pryr, rJava, Rcpp11, RGtk2
#define ENVFLAGS(x) ((x)->sxpinfo.gp) // ENVFLAGS unused
#define ENVSXP 4 // ENVSXP used 63 times in 25 packages
#define EXPRSXP 20 // EXPRSXP used 84 times in 14 packages
#define EXTPTRSXP 22 // EXTPTRSXP used 386 times in 55 packages
#define EXTPTR_PROT(x) ((x)->u.listsxp.cdrval) // EXTPTR_PROT used 5 times in rJava, pryr
#define EXTPTR_PTR(x) ((x)->u.listsxp.carval) // EXTPTR_PTR used 428 times in 15 packages
#define EXTPTR_TAG(x) ((x)->u.listsxp.tagval) // EXTPTR_TAG used 9 times in excel.link, pryr, rJava, gsl
#define FORMALS(x) ((x)->u.closxp.formals) // FORMALS used 15 times in qtpaint, RSclient, PBSddesolve, Rserve, covr, pryr, rgp, testthat, RandomFields
#define FRAME(x) ((x)->u.envsxp.frame) // FRAME used 19 times in deTestSet, IRISSeismic, pryr, BayesBridge, datamap, BayesLogit
#define FREESXP 31 // FREESXP used 4 times in rtkpp, rtkore
#define FUNSXP 99 // FUNSXP used 6 times in dplyr, rtkpp, data.table, rtkore
#define GetArrayDimnames Rf_GetArrayDimnames // GetArrayDimnames unused
#define GetColNames Rf_GetColNames // GetColNames unused
#define GetMatrixDimnames Rf_GetMatrixDimnames // GetMatrixDimnames used 2 times in Kmisc, optmatch
#define GetOption Rf_GetOption // GetOption used 5 times in rgl, gmp, Cairo, RGtk2
#define GetOption1 Rf_GetOption1 // GetOption1 used 1 times in PCICt
#define GetOptionDigits Rf_GetOptionDigits // GetOptionDigits unused
#define GetOptionWidth Rf_GetOptionWidth // GetOptionWidth unused
#define GetRowNames Rf_GetRowNames // GetRowNames unused
#define HASHTAB(x) ((x)->u.envsxp.hashtab) // HASHTAB used 12 times in Rcpp, pryr, datamap, Rcpp11, qtbase
#define HAVE_ALLOCA_H 1 // HAVE_ALLOCA_H used 15 times in treatSens, Matrix, TMB, pbdZMQ, ore, dbarts
#define HAVE_AQUA 1 // HAVE_AQUA used 13 times in 11 packages
#define HAVE_F77_UNDERSCORE 1 // HAVE_F77_UNDERSCORE used 2 times in igraph
#define IEEE_754 1 // IEEE_754 used 47 times in igraph, Rcpp, data.table, stringi
#define INCREMENT_NAMED(x) do { SEXP __x__ = (x); if (((__x__)->sxpinfo.named) != 2) (((__x__)->sxpinfo.named)=(((__x__)->sxpinfo.named) + 1)); } while (0) // INCREMENT_NAMED unused
#define INCREMENT_REFCNT(x) do {} while(0) // INCREMENT_REFCNT unused
#define INLINE_PROTECT // INLINE_PROTECT unused
#define INTEGER(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // INTEGER used 41659 times in 758 packages
#define INTERNAL(x) ((x)->u.symsxp.internal) // INTERNAL used 1014 times in 63 packages
#define INTSXP 13 // INTSXP used 6341 times in 471 packages
#define ISNA(x) R_IsNA(x) // ISNA used 649 times in 100 packages
#define ISNAN(x) R_isnancpp(x) // ISNAN used 1342 times in 146 packages
#define IS_GETTER_CALL(call) (((((call)->u.listsxp.cdrval))->u.listsxp.carval) == R_TmpvalSymbol) // IS_GETTER_CALL unused
#define IS_LONG_VEC(x) ((((VECSEXP) (x))->vecsxp.length) == -1) // IS_LONG_VEC used 1 times in RProtoBuf
#define IS_S4_OBJECT(x) ((x)->sxpinfo.gp & ((unsigned short)(1<<4))) // IS_S4_OBJECT used 23 times in Rmosek, Runuran, data.table, xts, Matrix, slam, zoo, HiPLARM, OpenMx, tau
#define IS_SCALAR(x, type) (((x)->sxpinfo.type) == (type) && (((VECSEXP) (x))->vecsxp.length) == 1) // IS_SCALAR unused
#define IS_SIMPLE_SCALAR(x, type) ((((x)->sxpinfo.type) == (type) && (((VECSEXP) (x))->vecsxp.length) == 1) && ((x)->attrib) == R_NilValue) // IS_SIMPLE_SCALAR unused
#define IndexWidth Rf_IndexWidth // IndexWidth unused
#define LANGSXP 6 // LANGSXP used 1276 times in 53 packages
#define LCONS(a, b) Rf_lcons((a), (b)) // LCONS used 212 times in 24 packages
#define LENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? R_BadLongVector(x, "/var/folders/t8/1ry582nx6438y8pn6gk20f3c0000gn/T/preprocessor_test2233054488227688894.cpp", 453) : (((VECSEXP) (x))->vecsxp.length)) // LENGTH used 5845 times in 356 packages
#define LEVELS(x) ((x)->sxpinfo.gp) // LEVELS used 18 times in rtdists, rPref, BsMD, data.table, stringi, dplyr, OBsMD, pbdZMQ, astrochron, RandomFields
#define LGLSXP 10 // LGLSXP used 1430 times in 166 packages
#define LISTSXP 2 // LISTSXP used 87 times in 21 packages
#define LISTVAL(x) ((x)->u.listsxp) // LISTVAL unused
#define LOGICAL(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // LOGICAL used 4473 times in 288 packages
#define LONG_VECTOR_SUPPORT // LONG_VECTOR_SUPPORT used 56 times in stringdist, matrixStats, RApiSerialize, Rhpc, pbdMPI, Rcpp11, Matrix
#define LONG_VEC_LENGTH(x) ((R_long_vec_hdr_t *) (x))[-1].lv_length // LONG_VEC_LENGTH used 1 times in Rcpp11
#define LONG_VEC_TRUELENGTH(x) ((R_long_vec_hdr_t *) (x))[-1].lv_truelength // LONG_VEC_TRUELENGTH unused
#define LibExport // LibExport used 2 times in hsmm
#define LibExtern extern // LibExtern used 4 times in rJava
#define LibImport // LibImport unused
#define MARK(x) ((x)->sxpinfo.mark) // MARK used 251 times in 21 packages
#define MARK_NOT_MUTABLE(x) (((x)->sxpinfo.named)=(2)) // MARK_NOT_MUTABLE unused
#define MAX_NUM_SEXPTYPE (1<<5) // MAX_NUM_SEXPTYPE unused
#define MAYBE_REFERENCED(x) (! (((x)->sxpinfo.named) == 0)) // MAYBE_REFERENCED unused
#define MAYBE_SHARED(x) (((x)->sxpinfo.named) > 1) // MAYBE_SHARED unused
#define MISSING(x) ((x)->sxpinfo.gp & 15) // MISSING used 125 times in 25 packages
#define MISSING_MASK 15 // MISSING_MASK used 10 times in rJPSGCS
#define NAMED(x) ((x)->sxpinfo.named) // NAMED used 62 times in 22 packages
#define NAMEDMAX 2 // NAMEDMAX unused
#define NA_INTEGER R_NaInt // NA_INTEGER used 1520 times in 183 packages
#define NA_LOGICAL R_NaInt // NA_LOGICAL used 355 times in 73 packages
#define NA_REAL R_NaReal // NA_REAL used 1667 times in 226 packages
#define NA_STRING R_NaString // NA_STRING used 574 times in 90 packages
#define NEWSXP 30 // NEWSXP used 4 times in rtkpp, rtkore
#define NILSXP 0 // NILSXP used 169 times in 44 packages
#define NORET __attribute__((noreturn)) // NORET unused
#define NOT_SHARED(x) (! (((x)->sxpinfo.named) > 1)) // NOT_SHARED unused
#define NO_REFERENCES(x) (((x)->sxpinfo.named) == 0) // NO_REFERENCES unused
#define NonNullStringMatch Rf_NonNullStringMatch // NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
#define OBJECT(x) ((x)->sxpinfo.obj) // OBJECT used 102 times in 28 packages
#define PREXPR(e) R_PromiseExpr(e) // PREXPR used 4 times in igraph, lazyeval
#define PRINTNAME(x) ((x)->u.symsxp.pname) // PRINTNAME used 92 times in 29 packages
#define PROMSXP 5 // PROMSXP used 43 times in 14 packages
#define PROTECT(s) Rf_protect(s) // PROTECT used 24686 times in 767 packages
#define PROTECT_WITH_INDEX(x,i) R_ProtectWithIndex(x,i) // PROTECT_WITH_INDEX used 91 times in 27 packages
#define PairToVectorList Rf_PairToVectorList // PairToVectorList used 7 times in cba, rcdd
#define PrintValue Rf_PrintValue // PrintValue used 119 times in 13 packages
#define RAW(x) ((Rbyte *) (((SEXPREC_ALIGN *) (x)) + 1)) // RAW used 880 times in 99 packages
#define RAWSXP 24 // RAWSXP used 587 times in 92 packages
#define RDEBUG(x) ((x)->sxpinfo.debug) // RDEBUG used 69 times in rmetasim
#define REAL(x) ((double *) (((SEXPREC_ALIGN *) (x)) + 1)) // REAL used 30947 times in 687 packages
#define REALSXP 14 // REALSXP used 10171 times in 573 packages
#define REFCNT(x) 0 // REFCNT unused
#define REFCNTMAX (4 - 1) // REFCNTMAX unused
#define REPROTECT(x,i) R_Reprotect(x,i) // REPROTECT used 130 times in 25 packages
#define RSTEP(x) ((x)->sxpinfo.spare) // RSTEP unused
#define RTRACE(x) ((x)->sxpinfo.trace) // RTRACE unused
#define R_ALLOCATOR_TYPE // R_ALLOCATOR_TYPE unused
#define R_ARITH_H_ // R_ARITH_H_ unused
#define R_COMPLEX_H // R_COMPLEX_H used 1 times in uniqueAtomMat
#define R_CheckStack() do { void __attribute__((noreturn)) R_SignalCStackOverflow(intptr_t); int dummy; intptr_t usage = R_CStackDir * (R_CStackStart - (uintptr_t)&dummy); if(R_CStackLimit != -1 && usage > ((intptr_t) R_CStackLimit)) R_SignalCStackOverflow(usage); } while (FALSE) // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
#define R_ERROR_H_ // R_ERROR_H_ unused
#define R_EXT_BOOLEAN_H_ // R_EXT_BOOLEAN_H_ used 2 times in jpeg, Rcpp11
#define R_EXT_MEMORY_H_ // R_EXT_MEMORY_H_ unused
#define R_EXT_PRINT_H_ // R_EXT_PRINT_H_ used 6 times in spTDyn, spTimer
#define R_EXT_UTILS_H_ // R_EXT_UTILS_H_ unused
#define R_FINITE(x) R_finite(x) // R_FINITE used 1387 times in 145 packages
#define R_INLINE inline // R_INLINE used 330 times in 34 packages
#define R_INTERNALS_H_ // R_INTERNALS_H_ used 7 times in uniqueAtomMat, rtkpp, rtkore, spatstat
#define R_LEN_T_MAX 2147483647 // R_LEN_T_MAX used 4 times in stringdist, matrixStats, FREGAT, Rcpp11
#define R_LONG_VEC_TOKEN -1 // R_LONG_VEC_TOKEN used 1 times in Rcpp11
#define R_RCONFIG_H // R_RCONFIG_H unused
#define R_SHORT_LEN_MAX 2147483647 // R_SHORT_LEN_MAX used 1 times in pbdMPI
#define R_XDR_DOUBLE_SIZE 8 // R_XDR_DOUBLE_SIZE used 2 times in rgdal
#define R_XDR_INTEGER_SIZE 4 // R_XDR_INTEGER_SIZE used 3 times in rgdal
#define R_XLEN_T_MAX 4503599627370496 // R_XLEN_T_MAX used 7 times in stringdist, Matrix, matrixStats, RApiSerialize, Rhpc
#define S3Class Rf_S3Class // S3Class used 4 times in RInside, littler
#define S4SXP 25 // S4SXP used 71 times in 15 packages
#define S4_OBJECT_MASK ((unsigned short)(1<<4)) // S4_OBJECT_MASK unused
#define SETLENGTH(x,v) do { SEXP sl__x__ = (x); R_xlen_t sl__v__ = (v); if (((((VECSEXP) (sl__x__))->vecsxp.length) == -1)) (((R_long_vec_hdr_t *) (sl__x__))[-1].lv_length = (sl__v__)); else ((((VECSEXP) (sl__x__))->vecsxp.length) = ((R_len_t) sl__v__)); } while (0) // SETLENGTH used 65 times in 11 packages
#define SETLEVELS(x,v) (((x)->sxpinfo.gp)=((unsigned short)v)) // SETLEVELS used 2 times in Rcpp11
#define SET_DDVAL(x,v) ((v) ? (((x)->sxpinfo.gp) |= 1) : (((x)->sxpinfo.gp) &= ~1)) // SET_DDVAL unused
#define SET_DDVAL_BIT(x) (((x)->sxpinfo.gp) |= 1) // SET_DDVAL_BIT unused
#define SET_ENVFLAGS(x,v) (((x)->sxpinfo.gp)=(v)) // SET_ENVFLAGS unused
#define SET_LONG_VEC_LENGTH(x,v) (((R_long_vec_hdr_t *) (x))[-1].lv_length = (v)) // SET_LONG_VEC_LENGTH unused
#define SET_LONG_VEC_TRUELENGTH(x,v) (((R_long_vec_hdr_t *) (x))[-1].lv_truelength = (v)) // SET_LONG_VEC_TRUELENGTH unused
#define SET_MISSING(x,v) do { SEXP __x__ = (x); int __v__ = (v); int __other_flags__ = __x__->sxpinfo.gp & ~15; __x__->sxpinfo.gp = __other_flags__ | __v__; } while (0) // SET_MISSING used 1 times in sprint
#define SET_NAMED(x, v) (((x)->sxpinfo.named)=(v)) // SET_NAMED used 10 times in dplyr, yaml, data.table, iotools, RSQLite
#define SET_OBJECT(x,v) (((x)->sxpinfo.obj)=(v)) // SET_OBJECT used 32 times in RSclient, reshape2, Rserve, data.table, actuar, dplyr, proxy, rmongodb, slam, tau
#define SET_RDEBUG(x,v) (((x)->sxpinfo.debug)=(v)) // SET_RDEBUG unused
#define SET_REFCNT(x,v) do {} while(0) // SET_REFCNT unused
#define SET_RSTEP(x,v) (((x)->sxpinfo.spare)=(v)) // SET_RSTEP unused
#define SET_RTRACE(x,v) (((x)->sxpinfo.trace)=(v)) // SET_RTRACE unused
#define SET_S4_OBJECT(x) (((x)->sxpinfo.gp) |= ((unsigned short)(1<<4))) // SET_S4_OBJECT used 12 times in RSclient, redland, Rserve, data.table, FREGAT, rJPSGCS, tau
#define SET_SHORT_VEC_LENGTH SET_SHORT_VEC_LENGTH // SET_SHORT_VEC_LENGTH unused
#define SET_SHORT_VEC_TRUELENGTH SET_SHORT_VEC_TRUELENGTH // SET_SHORT_VEC_TRUELENGTH unused
#define SET_TRACKREFS(x,v) do {} while(0) // SET_TRACKREFS unused
#define SET_TRUELENGTH(x,v) do { SEXP sl__x__ = (x); R_xlen_t sl__v__ = (v); if (((((VECSEXP) (sl__x__))->vecsxp.length) == -1)) (((R_long_vec_hdr_t *) (sl__x__))[-1].lv_truelength = (sl__v__)); else ((((VECSEXP) (sl__x__))->vecsxp.truelength) = ((R_len_t) sl__v__)); } while (0) // SET_TRUELENGTH used 26 times in data.table
#define SET_TYPEOF(x,v) (((x)->sxpinfo.type)=(v)) // SET_TYPEOF used 38 times in 21 packages
#define SEXPREC_HEADER <defined> // SEXPREC_HEADER unused
#define SHORT_VEC_LENGTH(x) (((VECSEXP) (x))->vecsxp.length) // SHORT_VEC_LENGTH used 1 times in Rcpp11
#define SHORT_VEC_TRUELENGTH(x) (((VECSEXP) (x))->vecsxp.truelength) // SHORT_VEC_TRUELENGTH unused
#define SIZEOF_SIZE_T 8 // SIZEOF_SIZE_T used 1 times in PythonInR
#define SPECIALSXP 7 // SPECIALSXP used 22 times in RPostgreSQL, PythonInR, Rcpp11, purrr, seqminer, Rcpp, yaml, pryr, rtkpp, rtkore
#define STRING_ELT(x,i) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))[i] // STRING_ELT used 4143 times in 333 packages
#define STRING_PTR(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // STRING_PTR used 65 times in 14 packages
#define STRSXP 16 // STRSXP used 3247 times in 327 packages
#define SUPPORT_MBCS 1 // SUPPORT_MBCS used 1 times in bibtex
#define SUPPORT_UTF8 1 // SUPPORT_UTF8 used 3 times in tau, rindex, stringi
#define SYMSXP 1 // SYMSXP used 94 times in 25 packages
#define SYMVALUE(x) ((x)->u.symsxp.value) // SYMVALUE unused
#define ScalarComplex Rf_ScalarComplex // ScalarComplex unused
#define ScalarInteger Rf_ScalarInteger // ScalarInteger used 704 times in 88 packages
#define ScalarLogical Rf_ScalarLogical // ScalarLogical used 450 times in 64 packages
#define ScalarRaw Rf_ScalarRaw // ScalarRaw used 4 times in qtbase, RGtk2
#define ScalarReal Rf_ScalarReal // ScalarReal used 330 times in 65 packages
#define ScalarString Rf_ScalarString // ScalarString used 198 times in 37 packages
#define StringBlank Rf_StringBlank // StringBlank unused
#define StringFalse Rf_StringFalse // StringFalse used 3 times in iotools
#define StringTrue Rf_StringTrue // StringTrue used 3 times in iotools
#define TAG(e) ((e)->u.listsxp.tagval) // TAG used 513 times in 40 packages
#define TRACKREFS(x) FALSE // TRACKREFS unused
#define TRUELENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? R_BadLongVector(x, "/var/folders/t8/1ry582nx6438y8pn6gk20f3c0000gn/T/preprocessor_test2233054488227688894.cpp", 1341) : (((VECSEXP) (x))->vecsxp.truelength)) // TRUELENGTH used 37 times in data.table
#define TYPEOF(x) ((x)->sxpinfo.type) // TYPEOF used 2832 times in 195 packages
#define TYPE_BITS 5 // TYPE_BITS used 2 times in dplyr
#define UNPROTECT(n) Rf_unprotect(n) // UNPROTECT used 12247 times in 758 packages
#define UNPROTECT_PTR(s) Rf_unprotect_ptr(s) // UNPROTECT_PTR used 307 times in 14 packages
#define UNSET_DDVAL_BIT(x) (((x)->sxpinfo.gp) &= ~1) // UNSET_DDVAL_BIT unused
#define UNSET_S4_OBJECT(x) (((x)->sxpinfo.gp) &= ~((unsigned short)(1<<4))) // UNSET_S4_OBJECT used 2 times in data.table, slam
#define VECSXP 19 // VECSXP used 3142 times in 385 packages
#define VECTOR_ELT(x,i) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))[i] // VECTOR_ELT used 8626 times in 291 packages
#define VECTOR_PTR(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // VECTOR_PTR used 17 times in bit, AdaptFitOS, RJSONIO, Rcpp11, bit64, Rcpp, locfit, iotools
#define VectorToPairList Rf_VectorToPairList // VectorToPairList used 13 times in pomp, arules
#define WEAKREFSXP 23 // WEAKREFSXP used 19 times in seqminer, Rcpp, pryr, rtkpp, rtkore, Rcpp11
#define XLENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? ((R_long_vec_hdr_t *) (x))[-1].lv_length : (((VECSEXP) (x))->vecsxp.length)) // XLENGTH used 287 times in 21 packages
#define XTRUELENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? ((R_long_vec_hdr_t *) (x))[-1].lv_truelength : (((VECSEXP) (x))->vecsxp.truelength)) // XTRUELENGTH unused
#define acopy_string Rf_acopy_string // acopy_string used 10 times in splusTimeDate
#define addMissingVarsToNewEnv Rf_addMissingVarsToNewEnv // addMissingVarsToNewEnv unused
#define alloc3DArray Rf_alloc3DArray // alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
#define allocArray Rf_allocArray // allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
#define allocFormalsList2 Rf_allocFormalsList2 // allocFormalsList2 unused
#define allocFormalsList3 Rf_allocFormalsList3 // allocFormalsList3 unused
#define allocFormalsList4 Rf_allocFormalsList4 // allocFormalsList4 unused
#define allocFormalsList5 Rf_allocFormalsList5 // allocFormalsList5 unused
#define allocFormalsList6 Rf_allocFormalsList6 // allocFormalsList6 unused
#define allocList Rf_allocList // allocList used 60 times in 25 packages
#define allocMatrix Rf_allocMatrix // allocMatrix used 1577 times in 244 packages
#define allocS4Object Rf_allocS4Object // allocS4Object used 1 times in arules
#define allocSExp Rf_allocSExp // allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
#define allocVector Rf_allocVector // allocVector used 12419 times in 551 packages
#define allocVector3 Rf_allocVector3 // allocVector3 unused
#define any_duplicated Rf_any_duplicated // any_duplicated used 5 times in data.table, checkmate
#define any_duplicated3 Rf_any_duplicated3 // any_duplicated3 unused
#define applyClosure Rf_applyClosure // applyClosure unused
#define arraySubscript Rf_arraySubscript // arraySubscript used 13 times in proxy, arules, arulesSequences, cba, seriation
#define asChar Rf_asChar // asChar used 194 times in 36 packages
#define asCharacterFactor Rf_asCharacterFactor // asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
#define asComplex Rf_asComplex // asComplex used 1 times in ff
#define asInteger Rf_asInteger // asInteger used 1277 times in 140 packages
#define asLogical Rf_asLogical // asLogical used 462 times in 64 packages
#define asReal Rf_asReal // asReal used 383 times in 83 packages
#define asS4 Rf_asS4 // asS4 unused
#define cPsort Rf_cPsort // cPsort unused
#define classgets Rf_classgets // classgets used 91 times in 30 packages
#define coerceVector Rf_coerceVector // coerceVector used 2585 times in 167 packages
#define conformable Rf_conformable // conformable used 141 times in 22 packages
#define cons Rf_cons // cons used 609 times in 39 packages
#define copyListMatrix Rf_copyListMatrix // copyListMatrix used 1 times in Matrix
#define copyMatrix Rf_copyMatrix // copyMatrix used 7 times in BDgraph, Matrix, kza
#define copyMostAttrib Rf_copyMostAttrib // copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
#define copyVector Rf_copyVector // copyVector used 12 times in tm, kza, mlegp, adaptivetau
#define countContexts Rf_countContexts // countContexts unused
#define defineVar Rf_defineVar // defineVar used 218 times in 38 packages
#define dimgets Rf_dimgets // dimgets used 3 times in CorrBin
#define dimnamesgets Rf_dimnamesgets // dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
#define duplicate Rf_duplicate // duplicate used 2088 times in 224 packages
#define duplicated Rf_duplicated // duplicated used 402 times in 100 packages
#define elt Rf_elt // elt used 2310 times in 37 packages
#define error Rf_error // error used 63771 times in 1109 packages
#define error_return(msg) { Rf_error(msg); return R_NilValue; } // error_return used 100 times in rpg, RPostgreSQL, Rook, git2r, grr, rJava, rmumps
#define errorcall Rf_errorcall // errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
#define errorcall_return(cl,msg) { Rf_errorcall(cl, msg); return R_NilValue; } // errorcall_return used 31 times in Runuran
#define eval Rf_eval // eval used 25178 times in 269 packages
#define findFun Rf_findFun // findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
#define findVar Rf_findVar // findVar used 1333 times in 24 packages
#define findVarInFrame Rf_findVarInFrame // findVarInFrame used 101 times in 13 packages
#define findVarInFrame3 Rf_findVarInFrame3 // findVarInFrame3 used 5 times in datamap
#define getAttrib Rf_getAttrib // getAttrib used 1930 times in 239 packages
#define getCharCE Rf_getCharCE // getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
#define gsetVar Rf_gsetVar // gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
#define iPsort Rf_iPsort // iPsort used 3 times in matrixStats, robustbase
#define inherits Rf_inherits // inherits used 814 times in 80 packages
#define install Rf_install // install used 3178 times in 224 packages
#define installChar Rf_installChar // installChar used 4 times in dplyr
#define installDDVAL Rf_installDDVAL // installDDVAL unused
#define installS3Signature Rf_installS3Signature // installS3Signature unused
#define isArray Rf_isArray // isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
#define isBasicClass Rf_isBasicClass // isBasicClass unused
#define isBlankString Rf_isBlankString // isBlankString used 1 times in iotools
#define isByteCode(x) (((x)->sxpinfo.type)==21) // isByteCode unused
#define isComplex(s) (((s)->sxpinfo.type) == 15) // isComplex used 119 times in checkmate, PythonInR, ifultools, Rblpapi, Rcpp11, rmatio, stringi, Matrix, qtbase
#define isEnvironment(s) (((s)->sxpinfo.type) == 4) // isEnvironment used 113 times in 52 packages
#define isExpression(s) (((s)->sxpinfo.type) == 20) // isExpression used 3 times in PythonInR, Rcpp11
#define isFactor Rf_isFactor // isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
#define isFrame Rf_isFrame // isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
#define isFree Rf_isFree // isFree unused
#define isFunction Rf_isFunction // isFunction used 274 times in 43 packages
#define isInteger Rf_isInteger // isInteger used 402 times in 77 packages
#define isLanguage Rf_isLanguage // isLanguage used 63 times in PythonInR, rgp, RandomFields
#define isList Rf_isList // isList used 40 times in 11 packages
#define isLogical(s) (((s)->sxpinfo.type) == 10) // isLogical used 215 times in 53 packages
#define isMatrix Rf_isMatrix // isMatrix used 293 times in 65 packages
#define isNewList Rf_isNewList // isNewList used 103 times in 27 packages
#define isNull(s) (((s)->sxpinfo.type) == 0) // isNull used 1915 times in 119 packages
#define isNumber Rf_isNumber // isNumber used 14 times in PythonInR, readr, stringi, qtbase
#define isNumeric Rf_isNumeric // isNumeric used 468 times in 49 packages
#define isObject(s) (((s)->sxpinfo.obj) != 0) // isObject used 11 times in dplyr, Rcpp, PythonInR, Rcpp11, stringi, rmumps
#define isOrdered Rf_isOrdered // isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
#define isPairList Rf_isPairList // isPairList used 2 times in PythonInR
#define isPrimitive Rf_isPrimitive // isPrimitive used 7 times in PythonInR, qtbase
#define isReal(s) (((s)->sxpinfo.type) == 14) // isReal used 323 times in 64 packages
#define isS4 Rf_isS4 // isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
#define isString(s) (((s)->sxpinfo.type) == 16) // isString used 280 times in 59 packages
#define isSymbol(s) (((s)->sxpinfo.type) == 1) // isSymbol used 68 times in PythonInR, data.table, Rcpp11, stringi, rgp, dbarts, rJava, sourcetools
#define isTs Rf_isTs // isTs used 2 times in PythonInR
#define isUnordered Rf_isUnordered // isUnordered used 2 times in PythonInR
#define isUnsorted Rf_isUnsorted // isUnsorted unused
#define isUserBinop Rf_isUserBinop // isUserBinop used 2 times in PythonInR
#define isValidString Rf_isValidString // isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
#define isValidStringF Rf_isValidStringF // isValidStringF used 2 times in PythonInR
#define isVector Rf_isVector // isVector used 182 times in 46 packages
#define isVectorAtomic Rf_isVectorAtomic // isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
#define isVectorList Rf_isVectorList // isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
#define isVectorizable Rf_isVectorizable // isVectorizable used 3 times in PythonInR, robfilter
#define lang1 Rf_lang1 // lang1 used 30 times in 11 packages
#define lang2 Rf_lang2 // lang2 used 216 times in 75 packages
#define lang3 Rf_lang3 // lang3 used 107 times in 28 packages
#define lang4 Rf_lang4 // lang4 used 65 times in 21 packages
#define lang5 Rf_lang5 // lang5 used 11 times in PBSddesolve, GNE, SMC
#define lang6 Rf_lang6 // lang6 used 2 times in GNE
#define lastElt Rf_lastElt // lastElt unused
#define lazy_duplicate Rf_lazy_duplicate // lazy_duplicate unused
#define lcons Rf_lcons // lcons used 16 times in rmgarch
#define length(x) Rf_length(x) // length used 44060 times in 1224 packages
#define lengthgets Rf_lengthgets // lengthgets used 47 times in 11 packages
#define list1 Rf_list1 // list1 used 197 times in 11 packages
#define list2 Rf_list2 // list2 used 441 times in 12 packages
#define list3 Rf_list3 // list3 used 72 times in marked, Rdsdp, BH, svd
#define list4 Rf_list4 // list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
#define list5 Rf_list5 // list5 used 63 times in Rdsdp, BH
#define listAppend Rf_listAppend // listAppend used 1 times in ore
#define match Rf_match // match used 8773 times in 388 packages
#define matchE Rf_matchE // matchE unused
#define mkChar Rf_mkChar // mkChar used 4545 times in 287 packages
#define mkCharCE Rf_mkCharCE // mkCharCE used 72 times in 15 packages
#define mkCharLen Rf_mkCharLen // mkCharLen used 38 times in 16 packages
#define mkCharLenCE Rf_mkCharLenCE // mkCharLenCE used 23 times in 11 packages
#define mkNamed Rf_mkNamed // mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
#define mkString Rf_mkString // mkString used 814 times in 96 packages
#define namesgets Rf_namesgets // namesgets used 80 times in 14 packages
#define ncols Rf_ncols // ncols used 3805 times in 182 packages
#define nlevels Rf_nlevels // nlevels used 546 times in 26 packages
#define nrows Rf_nrows // nrows used 4332 times in 215 packages
#define nthcdr Rf_nthcdr // nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
#define pmatch Rf_pmatch // pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
#define protect Rf_protect // protect used 599 times in 101 packages
#define psmatch Rf_psmatch // psmatch used 5 times in rgl
#define rPsort Rf_rPsort // rPsort used 63 times in 15 packages
#define reEnc Rf_reEnc // reEnc used 3 times in PythonInR, RJSONIO
#define readS3VarsFromFrame Rf_readS3VarsFromFrame // readS3VarsFromFrame unused
#define revsort Rf_revsort // revsort used 60 times in 20 packages
#define rownamesgets Rf_rownamesgets // rownamesgets unused
#define setAttrib Rf_setAttrib // setAttrib used 1830 times in 251 packages
#define setIVector Rf_setIVector // setIVector unused
#define setRVector Rf_setRVector // setRVector used 3 times in RcppClassic, RcppClassicExamples
#define setSVector Rf_setSVector // setSVector unused
#define setVar Rf_setVar // setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
#define shallow_duplicate Rf_shallow_duplicate // shallow_duplicate used 2 times in tmlenet, smint
#define str2type Rf_str2type // str2type used 1 time in RGtk2
#define stringPositionTr Rf_stringPositionTr // stringPositionTr unused
#define stringSuffix Rf_stringSuffix // stringSuffix unused
#define substitute Rf_substitute // substitute used 255 times in 56 packages
#define topenv Rf_topenv // topenv unused
#define translateChar Rf_translateChar // translateChar used 59 times in 19 packages
#define translateChar0 Rf_translateChar0 // translateChar0 unused
#define translateCharUTF8 Rf_translateCharUTF8 // translateCharUTF8 used 66 times in 13 packages
#define type2char Rf_type2char // type2char used 107 times in 12 packages
#define type2rstr Rf_type2rstr // type2rstr unused
#define type2str Rf_type2str // type2str used 3 times in Kmisc, yaml
#define type2str_nowarn Rf_type2str_nowarn // type2str_nowarn used 1 time in qrmtools
#define unprotect Rf_unprotect // unprotect used 110 times in 35 packages
#define unprotect_ptr Rf_unprotect_ptr // unprotect_ptr unused
#define warning Rf_warning // warning used 7679 times in 434 packages
#define warningcall Rf_warningcall // warningcall used 4 times in RInside, jsonlite, pbdMPI
#define warningcall_immediate Rf_warningcall_immediate // warningcall_immediate used 2 times in Runuran
#define xlength(x) Rf_xlength(x) // xlength used 186 times in stringdist, yuima, matrixStats, Rhpc, validate, checkmate, dplR, Rdsdp, pscl, DescTools
#define xlengthgets Rf_xlengthgets // xlengthgets unused
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Arith.h
extern "C" {
extern double R_NaN; // R_NaN used 469 times in 68 packages
extern double R_PosInf; // R_PosInf used 562 times in 112 packages
extern double R_NegInf; // R_NegInf used 699 times in 105 packages
extern double R_NaReal; // R_NaReal used 140 times in 34 packages
// NA_REAL used 1667 times in 226 packages
extern int R_NaInt; // R_NaInt used 58 times in 20 packages
// NA_INTEGER used 1520 times in 183 packages
// NA_LOGICAL used 355 times in 73 packages
int R_IsNA(double); // R_IsNA used 161 times in 40 packages
int R_IsNaN(double); // R_IsNaN used 75 times in 28 packages
int R_finite(double); // R_finite used 232 times in 44 packages
int R_isnancpp(double); // R_isnancpp used 8 times in igraph, PwrGSD
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Boolean.h
extern "C" {
typedef enum { FALSE = 0, TRUE } Rboolean;
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Complex.h
extern "C" {
typedef struct {
double r;
double i;
} Rcomplex; // Rcomplex used 893 times in 47 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Error.h
extern "C" {
void __attribute__((noreturn)) Rf_error(const char *, ...);
void __attribute__((noreturn)) UNIMPLEMENTED(const char *);
void __attribute__((noreturn)) WrongArgCount(const char *);
void Rf_warning(const char *, ...); // Rf_warning used 316 times in 66 packages
// warning used 7679 times in 434 packages
void R_ShowMessage(const char *s); // R_ShowMessage used 104 times in Rserve, rJava, HiPLARM
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Memory.h
extern "C" {
void* vmaxget(void); // vmaxget used 279 times in 20 packages
void vmaxset(const void *); // vmaxset used 279 times in 20 packages
void R_gc(void); // R_gc used 6 times in TMB, excel.link, gmatrix, microbenchmark
int R_gc_running(); // R_gc_running unused
char* R_alloc(size_t, int); // R_alloc used 7787 times in 330 packages
long double *R_allocLD(size_t nelem);
char* S_alloc(long, int); // S_alloc used 540 times in 50 packages
char* S_realloc(char *, long, long, int); // S_realloc used 55 times in 11 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Print.h
extern "C" {
void Rprintf(const char *, ...); // Rprintf used 33813 times in 729 packages
void REprintf(const char *, ...); // REprintf used 2531 times in 135 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Utils.h
extern "C" {
void R_isort(int*, int); // R_isort used 45 times in 18 packages
void R_rsort(double*, int); // R_rsort used 210 times in 29 packages
void R_csort(Rcomplex*, int); // R_csort unused
void rsort_with_index(double *, int *, int); // rsort_with_index used 77 times in 40 packages
void Rf_revsort(double*, int*, int); // Rf_revsort unused
// revsort used 60 times in 20 packages
void Rf_iPsort(int*, int, int); // Rf_iPsort unused
// iPsort used 3 times in matrixStats, robustbase
void Rf_rPsort(double*, int, int); // Rf_rPsort unused
// rPsort used 63 times in 15 packages
void Rf_cPsort(Rcomplex*, int, int); // Rf_cPsort unused
// cPsort unused
void R_qsort (double *v, size_t i, size_t j); // R_qsort used 10 times in extWeibQuant, pomp, robustbase, dplR, tclust, pcaPP
void R_qsort_I (double *v, int *II, int i, int j); // R_qsort_I used 33 times in 15 packages
void R_qsort_int (int *iv, size_t i, size_t j); // R_qsort_int unused
void R_qsort_int_I(int *iv, int *II, int i, int j); // R_qsort_int_I used 19 times in ff, matrixStats, arules, Rborist, slam, eco, bnlearn
const char *R_ExpandFileName(const char *); // R_ExpandFileName used 42 times in 20 packages
void Rf_setIVector(int*, int, int); // Rf_setIVector unused
// setIVector unused
void Rf_setRVector(double*, int, double); // Rf_setRVector unused
// setRVector used 3 times in RcppClassic, RcppClassicExamples
Rboolean Rf_StringFalse(const char *); // Rf_StringFalse unused
// StringFalse used 3 times in iotools
Rboolean Rf_StringTrue(const char *); // Rf_StringTrue unused
// StringTrue used 3 times in iotools
Rboolean Rf_isBlankString(const char *); // Rf_isBlankString unused
// isBlankString used 1 times in iotools
double R_atof(const char *str); // R_atof used 9 times in SSN, tree, foreign, iotools
double R_strtod(const char *c, char **end); // R_strtod used 4 times in ape, iotools
char *R_tmpnam(const char *prefix, const char *tempdir); // R_tmpnam used 2 times in geometry
char *R_tmpnam2(const char *prefix, const char *tempdir, const char *fileext); // R_tmpnam2 unused
void R_CheckUserInterrupt(void); // R_CheckUserInterrupt used 1487 times in 234 packages
void R_CheckStack(void); // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
void R_CheckStack2(size_t); // R_CheckStack2 unused
int findInterval(double *xt, int n, double x, // findInterval used 11 times in BSquare, DNAprofiles, unfoldr, chebpol, pomp, eco, protViz, PBSmapping, spatstat
Rboolean rightmost_closed, Rboolean all_inside, int ilo,
int *mflag);
void find_interv_vec(double *xt, int *n, double *x, int *nx, // find_interv_vec unused
int *rightmost_closed, int *all_inside, int *indx);
void R_max_col(double *matrix, int *nr, int *nc, int *maxes, int *ties_meth); // R_max_col used 2 times in geostatsp, MNP
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rinternals.h
extern "C" {
typedef unsigned char Rbyte;
typedef int R_len_t; // R_len_t used 2397 times in 70 packages
typedef ptrdiff_t R_xlen_t; // R_xlen_t used 1537 times in 32 packages
typedef struct { R_xlen_t lv_length, lv_truelength; } R_long_vec_hdr_t;
typedef unsigned int SEXPTYPE;
struct sxpinfo_struct {
SEXPTYPE type : 5;
unsigned int obj : 1;
unsigned int named : 2;
unsigned int gp : 16;
unsigned int mark : 1;
unsigned int debug : 1;
unsigned int trace : 1;
unsigned int spare : 1;
unsigned int gcgen : 1;
unsigned int gccls : 3;
};
struct vecsxp_struct {
R_len_t length;
R_len_t truelength;
};
struct primsxp_struct {
int offset;
};
struct symsxp_struct {
struct SEXPREC *pname;
struct SEXPREC *value;
struct SEXPREC *internal;
};
struct listsxp_struct {
struct SEXPREC *carval;
struct SEXPREC *cdrval;
struct SEXPREC *tagval;
};
struct envsxp_struct {
struct SEXPREC *frame;
struct SEXPREC *enclos;
struct SEXPREC *hashtab;
};
struct closxp_struct {
struct SEXPREC *formals;
struct SEXPREC *body;
struct SEXPREC *env;
};
struct promsxp_struct {
struct SEXPREC *value;
struct SEXPREC *expr;
struct SEXPREC *env;
};
typedef struct SEXPREC {
struct sxpinfo_struct sxpinfo; struct SEXPREC *attrib; struct SEXPREC *gengc_next_node, *gengc_prev_node;
union {
struct primsxp_struct primsxp;
struct symsxp_struct symsxp;
struct listsxp_struct listsxp;
struct envsxp_struct envsxp;
struct closxp_struct closxp;
struct promsxp_struct promsxp;
} u; // u unused
} SEXPREC, *SEXP;
typedef struct VECTOR_SEXPREC {
struct sxpinfo_struct sxpinfo; struct SEXPREC *attrib; struct SEXPREC *gengc_next_node, *gengc_prev_node;
struct vecsxp_struct vecsxp;
} VECTOR_SEXPREC, *VECSEXP;
typedef union { VECTOR_SEXPREC s; double align; } SEXPREC_ALIGN;
R_len_t __attribute__((noreturn)) R_BadLongVector(SEXP, const char *, int);
SEXP (ATTRIB)(SEXP x); // ATTRIB used 83 times in 20 packages
int (OBJECT)(SEXP x); // OBJECT used 102 times in 28 packages
int (MARK)(SEXP x); // MARK used 251 times in 21 packages
int (TYPEOF)(SEXP x); // TYPEOF used 2832 times in 195 packages
int (NAMED)(SEXP x); // NAMED used 62 times in 22 packages
int (REFCNT)(SEXP x); // REFCNT unused
void (SET_OBJECT)(SEXP x, int v); // SET_OBJECT used 32 times in RSclient, reshape2, Rserve, data.table, actuar, dplyr, proxy, rmongodb, slam, tau
void (SET_TYPEOF)(SEXP x, int v); // SET_TYPEOF used 38 times in 21 packages
void (SET_NAMED)(SEXP x, int v); // SET_NAMED used 10 times in dplyr, yaml, data.table, iotools, RSQLite
void SET_ATTRIB(SEXP x, SEXP v); // SET_ATTRIB used 54 times in 18 packages
void DUPLICATE_ATTRIB(SEXP to, SEXP from); // DUPLICATE_ATTRIB used 5 times in covr, lfe, testthat, data.table
int (IS_S4_OBJECT)(SEXP x); // IS_S4_OBJECT used 23 times in Rmosek, Runuran, data.table, xts, Matrix, slam, zoo, HiPLARM, OpenMx, tau
void (SET_S4_OBJECT)(SEXP x); // SET_S4_OBJECT used 12 times in RSclient, redland, Rserve, data.table, FREGAT, rJPSGCS, tau
void (UNSET_S4_OBJECT)(SEXP x); // UNSET_S4_OBJECT used 2 times in data.table, slam
int (LENGTH)(SEXP x); // LENGTH used 5845 times in 356 packages
int (TRUELENGTH)(SEXP x); // TRUELENGTH used 37 times in data.table
void (SETLENGTH)(SEXP x, int v); // SETLENGTH used 65 times in 11 packages
void (SET_TRUELENGTH)(SEXP x, int v); // SET_TRUELENGTH used 26 times in data.table
R_xlen_t (XLENGTH)(SEXP x); // XLENGTH used 287 times in 21 packages
R_xlen_t (XTRUELENGTH)(SEXP x); // XTRUELENGTH unused
int (IS_LONG_VEC)(SEXP x); // IS_LONG_VEC used 1 time in RProtoBuf
int (LEVELS)(SEXP x); // LEVELS used 18 times in rtdists, rPref, BsMD, data.table, stringi, dplyr, OBsMD, pbdZMQ, astrochron, RandomFields
int (SETLEVELS)(SEXP x, int v); // SETLEVELS used 2 times in Rcpp11
int *(LOGICAL)(SEXP x); // LOGICAL used 4473 times in 288 packages
int *(INTEGER)(SEXP x); // INTEGER used 41659 times in 758 packages
Rbyte *(RAW)(SEXP x); // RAW used 880 times in 99 packages
double *(REAL)(SEXP x); // REAL used 30947 times in 687 packages
Rcomplex *(COMPLEX)(SEXP x); // COMPLEX used 1697 times in 71 packages
SEXP (STRING_ELT)(SEXP x, R_xlen_t i); // STRING_ELT used 4143 times in 333 packages
SEXP (VECTOR_ELT)(SEXP x, R_xlen_t i); // VECTOR_ELT used 8626 times in 291 packages
void SET_STRING_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_STRING_ELT used 5834 times in 321 packages
SEXP SET_VECTOR_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_VECTOR_ELT used 9751 times in 391 packages
SEXP *(STRING_PTR)(SEXP x); // STRING_PTR used 65 times in 14 packages
SEXP * __attribute__((noreturn)) (VECTOR_PTR)(SEXP x);
SEXP (TAG)(SEXP e); // TAG used 513 times in 40 packages
SEXP (CAR)(SEXP e); // CAR used 575 times in 63 packages
SEXP (CDR)(SEXP e); // CDR used 4523 times in 76 packages
SEXP (CAAR)(SEXP e); // CAAR unused
SEXP (CDAR)(SEXP e); // CDAR unused
SEXP (CADR)(SEXP e); // CADR used 104 times in 17 packages
SEXP (CDDR)(SEXP e); // CDDR used 52 times in Rlabkey, Rcpp11, dplyr, proxy, Rcpp, slam, tikzDevice, OpenCL, svd
SEXP (CDDDR)(SEXP e); // CDDDR unused
SEXP (CADDR)(SEXP e); // CADDR used 52 times in 11 packages
SEXP (CADDDR)(SEXP e); // CADDDR used 21 times in RPostgreSQL, foreign, actuar, bibtex
SEXP (CAD4R)(SEXP e); // CAD4R used 14 times in earth, foreign, actuar
int (MISSING)(SEXP x); // MISSING used 125 times in 25 packages
void (SET_MISSING)(SEXP x, int v); // SET_MISSING used 1 time in sprint
void SET_TAG(SEXP x, SEXP y); // SET_TAG used 200 times in 34 packages
SEXP SETCAR(SEXP x, SEXP y); // SETCAR used 4072 times in 47 packages
SEXP SETCDR(SEXP x, SEXP y); // SETCDR used 46 times in 14 packages
SEXP SETCADR(SEXP x, SEXP y); // SETCADR used 112 times in 37 packages
SEXP SETCADDR(SEXP x, SEXP y); // SETCADDR used 45 times in 14 packages
SEXP SETCADDDR(SEXP x, SEXP y); // SETCADDDR used 31 times in 12 packages
SEXP SETCAD4R(SEXP e, SEXP y); // SETCAD4R used 15 times in kergp, Sim.DiffProc, tikzDevice
SEXP CONS_NR(SEXP a, SEXP b); // CONS_NR unused
SEXP (FORMALS)(SEXP x); // FORMALS used 15 times in qtpaint, RSclient, PBSddesolve, Rserve, covr, pryr, rgp, testthat, RandomFields
SEXP (BODY)(SEXP x); // BODY used 48 times in 15 packages
SEXP (CLOENV)(SEXP x); // CLOENV used 23 times in Rcpp11, covr, pomp, Rcpp, pryr, testthat, qtbase
int (RDEBUG)(SEXP x); // RDEBUG used 69 times in rmetasim
int (RSTEP)(SEXP x); // RSTEP unused
int (RTRACE)(SEXP x); // RTRACE unused
void (SET_RDEBUG)(SEXP x, int v); // SET_RDEBUG unused
void (SET_RSTEP)(SEXP x, int v); // SET_RSTEP unused
void (SET_RTRACE)(SEXP x, int v); // SET_RTRACE unused
void SET_FORMALS(SEXP x, SEXP v); // SET_FORMALS used 5 times in covr, rgp, testthat, qtbase
void SET_BODY(SEXP x, SEXP v); // SET_BODY used 6 times in covr, rgp, testthat, qtbase
void SET_CLOENV(SEXP x, SEXP v); // SET_CLOENV used 6 times in covr, rgp, testthat, qtbase
SEXP (PRINTNAME)(SEXP x); // PRINTNAME used 92 times in 29 packages
SEXP (SYMVALUE)(SEXP x); // SYMVALUE unused
SEXP (INTERNAL)(SEXP x); // INTERNAL used 1014 times in 63 packages
int (DDVAL)(SEXP x); // DDVAL unused
void (SET_DDVAL)(SEXP x, int v); // SET_DDVAL unused
void SET_PRINTNAME(SEXP x, SEXP v); // SET_PRINTNAME unused
void SET_SYMVALUE(SEXP x, SEXP v); // SET_SYMVALUE unused
void SET_INTERNAL(SEXP x, SEXP v); // SET_INTERNAL unused
SEXP (FRAME)(SEXP x); // FRAME used 19 times in deTestSet, IRISSeismic, pryr, BayesBridge, datamap, BayesLogit
SEXP (ENCLOS)(SEXP x); // ENCLOS used 7 times in Rcpp, pryr, rJava, Rcpp11, RGtk2
SEXP (HASHTAB)(SEXP x); // HASHTAB used 12 times in Rcpp, pryr, datamap, Rcpp11, qtbase
int (ENVFLAGS)(SEXP x); // ENVFLAGS unused
void (SET_ENVFLAGS)(SEXP x, int v); // SET_ENVFLAGS unused
void SET_FRAME(SEXP x, SEXP v); // SET_FRAME used 4 times in rgp, mmap, qtbase
void SET_ENCLOS(SEXP x, SEXP v); // SET_ENCLOS used 7 times in rgp, RandomFields, mmap, qtbase
void SET_HASHTAB(SEXP x, SEXP v); // SET_HASHTAB used 5 times in rgp, mmap, qtbase
SEXP (PRCODE)(SEXP x); // PRCODE used 15 times in dplyr, Rcpp, pryr, Rcpp11
SEXP (PRENV)(SEXP x); // PRENV used 14 times in igraph, dplyr, Rcpp, pryr, Rcpp11, lazyeval
SEXP (PRVALUE)(SEXP x); // PRVALUE used 12 times in dplyr, Rcpp, pryr, Rcpp11
int (PRSEEN)(SEXP x); // PRSEEN used 4 times in Rcpp, Rcpp11
void (SET_PRSEEN)(SEXP x, int v); // SET_PRSEEN unused
void SET_PRENV(SEXP x, SEXP v); // SET_PRENV unused
void SET_PRVALUE(SEXP x, SEXP v); // SET_PRVALUE unused
void SET_PRCODE(SEXP x, SEXP v); // SET_PRCODE unused
void SET_PRSEEN(SEXP x, int v); // SET_PRSEEN unused
int (HASHASH)(SEXP x); // HASHASH unused
int (HASHVALUE)(SEXP x); // HASHVALUE unused
void (SET_HASHASH)(SEXP x, int v); // SET_HASHASH unused
void (SET_HASHVALUE)(SEXP x, int v); // SET_HASHVALUE unused
typedef int PROTECT_INDEX; // PROTECT_INDEX used 94 times in 27 packages
extern SEXP R_GlobalEnv; // R_GlobalEnv used 1400 times in 79 packages
extern SEXP R_EmptyEnv; // R_EmptyEnv used 16 times in Rserve, dplR, Rcpp11, Rcpp, RcppClassic, pryr, rJava, adaptivetau, qtbase
extern SEXP R_BaseEnv; // R_BaseEnv used 27 times in 15 packages
extern SEXP R_BaseNamespace; // R_BaseNamespace used 3 times in Rcpp, Rcpp11
extern SEXP R_NamespaceRegistry; // R_NamespaceRegistry used 3 times in devtools, namespace, Rcpp
extern SEXP R_Srcref; // R_Srcref unused
extern SEXP R_NilValue; // R_NilValue used 10178 times in 491 packages
extern SEXP R_UnboundValue; // R_UnboundValue used 73 times in 23 packages
extern SEXP R_MissingArg; // R_MissingArg used 21 times in 12 packages
extern SEXP R_RestartToken; // R_RestartToken unused
extern SEXP R_baseSymbol; // R_baseSymbol unused
extern SEXP R_BaseSymbol; // R_BaseSymbol unused
extern SEXP R_BraceSymbol; // R_BraceSymbol unused
extern SEXP R_Bracket2Symbol; // R_Bracket2Symbol used 4 times in purrr
extern SEXP R_BracketSymbol; // R_BracketSymbol unused
extern SEXP R_ClassSymbol; // R_ClassSymbol used 311 times in 84 packages
extern SEXP R_DeviceSymbol; // R_DeviceSymbol unused
extern SEXP R_DimNamesSymbol; // R_DimNamesSymbol used 230 times in 51 packages
extern SEXP R_DimSymbol; // R_DimSymbol used 1015 times in 170 packages
extern SEXP R_DollarSymbol; // R_DollarSymbol used 6 times in dplyr, Rcpp, Rcpp11
extern SEXP R_DotsSymbol; // R_DotsSymbol used 13 times in RPostgreSQL, RcppDE, lbfgs, purrr, RMySQL, DEoptim, qtbase
extern SEXP R_DoubleColonSymbol; // R_DoubleColonSymbol unused
extern SEXP R_DropSymbol; // R_DropSymbol unused
extern SEXP R_LastvalueSymbol; // R_LastvalueSymbol unused
extern SEXP R_LevelsSymbol; // R_LevelsSymbol used 51 times in 17 packages
extern SEXP R_ModeSymbol; // R_ModeSymbol unused
extern SEXP R_NaRmSymbol; // R_NaRmSymbol used 2 times in dplyr
extern SEXP R_NameSymbol; // R_NameSymbol used 2 times in qtbase
extern SEXP R_NamesSymbol; // R_NamesSymbol used 1373 times in 249 packages
extern SEXP R_NamespaceEnvSymbol; // R_NamespaceEnvSymbol unused
extern SEXP R_PackageSymbol; // R_PackageSymbol used 2 times in Rmosek, HiPLARM
extern SEXP R_PreviousSymbol; // R_PreviousSymbol unused
extern SEXP R_QuoteSymbol; // R_QuoteSymbol unused
extern SEXP R_RowNamesSymbol; // R_RowNamesSymbol used 97 times in 37 packages
extern SEXP R_SeedsSymbol; // R_SeedsSymbol used 2 times in treatSens
extern SEXP R_SortListSymbol; // R_SortListSymbol unused
extern SEXP R_SourceSymbol; // R_SourceSymbol unused
extern SEXP R_SpecSymbol; // R_SpecSymbol unused
extern SEXP R_TripleColonSymbol; // R_TripleColonSymbol unused
extern SEXP R_TspSymbol; // R_TspSymbol unused
extern SEXP R_dot_defined; // R_dot_defined unused
extern SEXP R_dot_Method; // R_dot_Method unused
extern SEXP R_dot_packageName; // R_dot_packageName unused
extern SEXP R_dot_target; // R_dot_target unused
extern SEXP R_NaString; // R_NaString used 36 times in stringdist, RCurl, RSclient, uniqueAtomMat, XML, Rserve, Rblpapi, SoundexBR, rJava, iotools
// NA_STRING used 574 times in 90 packages
extern SEXP R_BlankString; // R_BlankString used 39 times in 13 packages
extern SEXP R_BlankScalarString; // R_BlankScalarString unused
SEXP R_GetCurrentSrcref(int); // R_GetCurrentSrcref unused
SEXP R_GetSrcFilename(SEXP); // R_GetSrcFilename unused
SEXP Rf_asChar(SEXP); // Rf_asChar used 246 times in 16 packages
// asChar used 194 times in 36 packages
SEXP Rf_coerceVector(SEXP, SEXPTYPE); // Rf_coerceVector used 44 times in 13 packages
// coerceVector used 2585 times in 167 packages
SEXP Rf_PairToVectorList(SEXP x); // Rf_PairToVectorList unused
// PairToVectorList used 7 times in cba, rcdd
SEXP Rf_VectorToPairList(SEXP x); // Rf_VectorToPairList unused
// VectorToPairList used 13 times in pomp, arules
SEXP Rf_asCharacterFactor(SEXP x); // Rf_asCharacterFactor used 3 times in tidyr, reshape2, RSQLite
// asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
int Rf_asLogical(SEXP x); // Rf_asLogical used 45 times in 11 packages
// asLogical used 462 times in 64 packages
int Rf_asInteger(SEXP x); // Rf_asInteger used 746 times in 23 packages
// asInteger used 1277 times in 140 packages
double Rf_asReal(SEXP x); // Rf_asReal used 113 times in 17 packages
// asReal used 383 times in 83 packages
Rcomplex Rf_asComplex(SEXP x); // Rf_asComplex unused
// asComplex used 1 time in ff
typedef struct R_allocator R_allocator_t;
char * Rf_acopy_string(const char *); // Rf_acopy_string unused
// acopy_string used 10 times in splusTimeDate
void Rf_addMissingVarsToNewEnv(SEXP, SEXP); // Rf_addMissingVarsToNewEnv unused
// addMissingVarsToNewEnv unused
SEXP Rf_alloc3DArray(SEXPTYPE, int, int, int); // Rf_alloc3DArray unused
// alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
SEXP Rf_allocArray(SEXPTYPE, SEXP); // Rf_allocArray used 4 times in h5
// allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
SEXP Rf_allocFormalsList2(SEXP sym1, SEXP sym2); // Rf_allocFormalsList2 unused
// allocFormalsList2 unused
SEXP Rf_allocFormalsList3(SEXP sym1, SEXP sym2, SEXP sym3); // Rf_allocFormalsList3 unused
// allocFormalsList3 unused
SEXP Rf_allocFormalsList4(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4); // Rf_allocFormalsList4 unused
// allocFormalsList4 unused
SEXP Rf_allocFormalsList5(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5); // Rf_allocFormalsList5 unused
// allocFormalsList5 unused
SEXP Rf_allocFormalsList6(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5, SEXP sym6); // Rf_allocFormalsList6 unused
// allocFormalsList6 unused
SEXP Rf_allocMatrix(SEXPTYPE, int, int); // Rf_allocMatrix used 122 times in 14 packages
// allocMatrix used 1577 times in 244 packages
SEXP Rf_allocList(int); // Rf_allocList unused
// allocList used 60 times in 25 packages
SEXP Rf_allocS4Object(void); // Rf_allocS4Object used 2 times in Rserve, RSclient
// allocS4Object used 1 time in arules
SEXP Rf_allocSExp(SEXPTYPE); // Rf_allocSExp unused
// allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
SEXP Rf_allocVector3(SEXPTYPE, R_xlen_t, R_allocator_t*); // Rf_allocVector3 unused
// allocVector3 unused
R_xlen_t Rf_any_duplicated(SEXP x, Rboolean from_last); // Rf_any_duplicated unused
// any_duplicated used 5 times in data.table, checkmate
R_xlen_t Rf_any_duplicated3(SEXP x, SEXP incomp, Rboolean from_last); // Rf_any_duplicated3 unused
// any_duplicated3 unused
SEXP Rf_applyClosure(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_applyClosure unused
// applyClosure unused
SEXP Rf_arraySubscript(int, SEXP, SEXP, SEXP (*)(SEXP,SEXP),
SEXP (*)(SEXP, int), SEXP);
SEXP Rf_classgets(SEXP, SEXP); // Rf_classgets used 27 times in fts, clpAPI, cplexAPI, sybilSBML, Rblpapi, glpkAPI
// classgets used 91 times in 30 packages
SEXP Rf_cons(SEXP, SEXP); // Rf_cons used 39 times in dplyr, Rcpp, Rcpp11
// cons used 609 times in 39 packages
void Rf_copyMatrix(SEXP, SEXP, Rboolean); // Rf_copyMatrix used 8 times in CNVassoc
// copyMatrix used 7 times in BDgraph, Matrix, kza
void Rf_copyListMatrix(SEXP, SEXP, Rboolean); // Rf_copyListMatrix unused
// copyListMatrix used 1 time in Matrix
void Rf_copyMostAttrib(SEXP, SEXP); // Rf_copyMostAttrib used 8 times in tidyr, purrr, Rcpp, reshape2
// copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
void Rf_copyVector(SEXP, SEXP); // Rf_copyVector unused
// copyVector used 12 times in tm, kza, mlegp, adaptivetau
int Rf_countContexts(int, int); // Rf_countContexts unused
// countContexts unused
SEXP Rf_CreateTag(SEXP); // Rf_CreateTag unused
// CreateTag used 1 time in rgp
void Rf_defineVar(SEXP, SEXP, SEXP); // Rf_defineVar used 7 times in purrr, Rcpp, Rserve, Rcpp11
// defineVar used 218 times in 38 packages
SEXP Rf_dimgets(SEXP, SEXP); // Rf_dimgets unused
// dimgets used 3 times in CorrBin
SEXP Rf_dimnamesgets(SEXP, SEXP); // Rf_dimnamesgets unused
// dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
SEXP Rf_DropDims(SEXP); // Rf_DropDims unused
// DropDims unused
SEXP Rf_duplicate(SEXP); // Rf_duplicate used 21 times in XML, data.table, Rcpp11, lme4, dplyr, Rcpp, RcppClassic, grr, NMF, copula
// duplicate used 2088 times in 224 packages
SEXP Rf_shallow_duplicate(SEXP); // Rf_shallow_duplicate unused
// shallow_duplicate used 2 times in tmlenet, smint
SEXP Rf_lazy_duplicate(SEXP); // Rf_lazy_duplicate unused
// lazy_duplicate unused
SEXP Rf_duplicated(SEXP, Rboolean); // Rf_duplicated unused
// duplicated used 402 times in 100 packages
Rboolean R_envHasNoSpecialSymbols(SEXP); // R_envHasNoSpecialSymbols unused
SEXP Rf_eval(SEXP, SEXP); // Rf_eval used 105 times in 24 packages
// eval used 25178 times in 269 packages
SEXP Rf_findFun(SEXP, SEXP); // Rf_findFun used 7 times in Rcpp, Rcpp11, littler, RGtk2
// findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
SEXP Rf_findVar(SEXP, SEXP); // Rf_findVar used 19 times in R2SWF, Rcpp11, dplyr, Rcpp, pryr, rJava, littler, showtext
// findVar used 1333 times in 24 packages
SEXP Rf_findVarInFrame(SEXP, SEXP); // Rf_findVarInFrame used 7 times in RCurl, Rcpp, Rcpp11
// findVarInFrame used 101 times in 13 packages
SEXP Rf_findVarInFrame3(SEXP, SEXP, Rboolean); // Rf_findVarInFrame3 used 1 times in pryr
// findVarInFrame3 used 5 times in datamap
SEXP Rf_getAttrib(SEXP, SEXP); // Rf_getAttrib used 256 times in 36 packages
// getAttrib used 1930 times in 239 packages
SEXP Rf_GetArrayDimnames(SEXP); // Rf_GetArrayDimnames unused
// GetArrayDimnames unused
SEXP Rf_GetColNames(SEXP); // Rf_GetColNames unused
// GetColNames unused
void Rf_GetMatrixDimnames(SEXP, SEXP*, SEXP*, const char**, const char**); // Rf_GetMatrixDimnames unused
// GetMatrixDimnames used 2 times in Kmisc, optmatch
SEXP Rf_GetOption(SEXP, SEXP); // Rf_GetOption unused
// GetOption used 5 times in rgl, gmp, Cairo, RGtk2
SEXP Rf_GetOption1(SEXP); // Rf_GetOption1 used 5 times in RProtoBuf, gmp
// GetOption1 used 1 time in PCICt
int Rf_GetOptionDigits(void); // Rf_GetOptionDigits unused
// GetOptionDigits unused
int Rf_GetOptionWidth(void); // Rf_GetOptionWidth used 1 time in progress
// GetOptionWidth unused
SEXP Rf_GetRowNames(SEXP); // Rf_GetRowNames unused
// GetRowNames unused
void Rf_gsetVar(SEXP, SEXP, SEXP); // Rf_gsetVar unused
// gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
SEXP Rf_install(const char *); // Rf_install used 990 times in 50 packages
// install used 3178 times in 224 packages
SEXP Rf_installChar(SEXP); // Rf_installChar used 15 times in dplyr, Rcpp
// installChar used 4 times in dplyr
SEXP Rf_installDDVAL(int i); // Rf_installDDVAL unused
// installDDVAL unused
SEXP Rf_installS3Signature(const char *, const char *); // Rf_installS3Signature unused
// installS3Signature unused
Rboolean Rf_isFree(SEXP); // Rf_isFree unused
// isFree unused
Rboolean Rf_isOrdered(SEXP); // Rf_isOrdered unused
// isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
Rboolean Rf_isUnordered(SEXP); // Rf_isUnordered used 1 time in OpenMx
// isUnordered used 2 times in PythonInR
Rboolean Rf_isUnsorted(SEXP, Rboolean); // Rf_isUnsorted unused
// isUnsorted unused
SEXP Rf_lengthgets(SEXP, R_len_t); // Rf_lengthgets used 7 times in readxl, readr
// lengthgets used 47 times in 11 packages
SEXP Rf_xlengthgets(SEXP, R_xlen_t); // Rf_xlengthgets unused
// xlengthgets unused
SEXP R_lsInternal(SEXP, Rboolean); // R_lsInternal used 5 times in Rcpp, rJava, Rcpp11, qtbase
SEXP R_lsInternal3(SEXP, Rboolean, Rboolean); // R_lsInternal3 unused
SEXP Rf_match(SEXP, SEXP, int); // Rf_match used 2 times in Rvcg
// match used 8773 times in 388 packages
SEXP Rf_matchE(SEXP, SEXP, int, SEXP); // Rf_matchE unused
// matchE unused
SEXP Rf_namesgets(SEXP, SEXP); // Rf_namesgets used 4 times in OpenMx, rpf
// namesgets used 80 times in 14 packages
SEXP Rf_mkChar(const char *); // Rf_mkChar used 517 times in 32 packages
// mkChar used 4545 times in 287 packages
SEXP Rf_mkCharLen(const char *, int); // Rf_mkCharLen used 21 times in refGenome, redland, Rcpp11, stringi, Kmisc, Rcpp, sourcetools, iotools
// mkCharLen used 38 times in 16 packages
Rboolean Rf_NonNullStringMatch(SEXP, SEXP); // Rf_NonNullStringMatch unused
// NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
int Rf_ncols(SEXP); // Rf_ncols used 22 times in fdaPDE, fts, BoomSpikeSlab, Rmosek, ccgarch, rcppbugs, biganalytics, CEC, OpenMx, RTriangle
// ncols used 3805 times in 182 packages
int Rf_nrows(SEXP); // Rf_nrows used 32 times in 12 packages
// nrows used 4332 times in 215 packages
SEXP Rf_nthcdr(SEXP, int); // Rf_nthcdr unused
// nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
typedef enum {Bytes, Chars, Width} nchar_type;
int R_nchar(SEXP string, nchar_type type_, // R_nchar unused
Rboolean allowNA, Rboolean keepNA, const char* msg_name);
Rboolean Rf_pmatch(SEXP, SEXP, Rboolean); // Rf_pmatch unused
// pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
Rboolean Rf_psmatch(const char *, const char *, Rboolean); // Rf_psmatch unused
// psmatch used 5 times in rgl
void Rf_PrintValue(SEXP); // Rf_PrintValue used 95 times in 19 packages
// PrintValue used 119 times in 13 packages
void Rf_readS3VarsFromFrame(SEXP, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*); // Rf_readS3VarsFromFrame unused
// readS3VarsFromFrame unused
SEXP Rf_setAttrib(SEXP, SEXP, SEXP); // Rf_setAttrib used 325 times in 35 packages
// setAttrib used 1830 times in 251 packages
void Rf_setSVector(SEXP*, int, SEXP); // Rf_setSVector unused
// setSVector unused
void Rf_setVar(SEXP, SEXP, SEXP); // Rf_setVar used 1 time in showtext
// setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
SEXP Rf_stringSuffix(SEXP, int); // Rf_stringSuffix unused
// stringSuffix unused
SEXPTYPE Rf_str2type(const char *); // Rf_str2type used 4 times in purrr
// str2type used 1 times in RGtk2
Rboolean Rf_StringBlank(SEXP); // Rf_StringBlank used 1 time in LCMCR
// StringBlank unused
SEXP Rf_substitute(SEXP,SEXP); // Rf_substitute unused
// substitute used 255 times in 56 packages
const char * Rf_translateChar(SEXP); // Rf_translateChar used 1 time in devEMF
// translateChar used 59 times in 19 packages
const char * Rf_translateChar0(SEXP); // Rf_translateChar0 unused
// translateChar0 unused
const char * Rf_translateCharUTF8(SEXP); // Rf_translateCharUTF8 used 22 times in Rserve, xml2, readr, gdtools, Rcpp11, dplyr, Rcpp, haven
// translateCharUTF8 used 66 times in 13 packages
const char * Rf_type2char(SEXPTYPE); // Rf_type2char used 33 times in 13 packages
// type2char used 107 times in 12 packages
SEXP Rf_type2rstr(SEXPTYPE); // Rf_type2rstr unused
// type2rstr unused
SEXP Rf_type2str(SEXPTYPE); // Rf_type2str used 4 times in Rcpp, pryr
// type2str used 3 times in Kmisc, yaml
SEXP Rf_type2str_nowarn(SEXPTYPE); // Rf_type2str_nowarn unused
// type2str_nowarn used 1 time in qrmtools
void Rf_unprotect_ptr(SEXP); // Rf_unprotect_ptr unused
// unprotect_ptr unused
void __attribute__((noreturn)) R_signal_protect_error(void);
void __attribute__((noreturn)) R_signal_unprotect_error(void);
void __attribute__((noreturn)) R_signal_reprotect_error(PROTECT_INDEX i);
SEXP R_tryEval(SEXP, SEXP, int *); // R_tryEval used 1118 times in 24 packages
SEXP R_tryEvalSilent(SEXP, SEXP, int *); // R_tryEvalSilent unused
const char *R_curErrorBuf(); // R_curErrorBuf used 4 times in Rhpc, Rcpp11
Rboolean Rf_isS4(SEXP); // Rf_isS4 used 16 times in Rcpp, Rcpp11
// isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
SEXP Rf_asS4(SEXP, Rboolean, int); // Rf_asS4 unused
// asS4 unused
SEXP Rf_S3Class(SEXP); // Rf_S3Class unused
// S3Class used 4 times in RInside, littler
int Rf_isBasicClass(const char *); // Rf_isBasicClass unused
// isBasicClass unused
Rboolean R_cycle_detected(SEXP s, SEXP child); // R_cycle_detected unused
typedef enum {
CE_NATIVE = 0,
CE_UTF8 = 1,
CE_LATIN1 = 2,
CE_BYTES = 3,
CE_SYMBOL = 5,
CE_ANY =99
} cetype_t; // cetype_t used 47 times in 13 packages
cetype_t Rf_getCharCE(SEXP); // Rf_getCharCE used 13 times in RSclient, Rserve, genie, dplyr, Rcpp, rJava, ROracle
// getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
SEXP Rf_mkCharCE(const char *, cetype_t); // Rf_mkCharCE used 40 times in readxl, mongolite, xml2, readr, Rcpp11, stringi, commonmark, dplyr, Rcpp, haven
// mkCharCE used 72 times in 15 packages
SEXP Rf_mkCharLenCE(const char *, int, cetype_t); // Rf_mkCharLenCE used 68 times in readr, ROracle, stringi
// mkCharLenCE used 23 times in 11 packages
const char *Rf_reEnc(const char *x, cetype_t ce_in, cetype_t ce_out, int subst); // Rf_reEnc used 5 times in RCurl, RSclient, Rserve, rJava
// reEnc used 3 times in PythonInR, RJSONIO
SEXP R_forceAndCall(SEXP e, int n, SEXP rho); // R_forceAndCall unused
SEXP R_MakeExternalPtr(void *p, SEXP tag, SEXP prot); // R_MakeExternalPtr used 321 times in 102 packages
void *R_ExternalPtrAddr(SEXP s); // R_ExternalPtrAddr used 2127 times in 115 packages
SEXP R_ExternalPtrTag(SEXP s); // R_ExternalPtrTag used 195 times in 32 packages
SEXP R_ExternalPtrProtected(SEXP s); // R_ExternalPtrProtected used 6 times in PopGenome, Rcpp, WhopGenome, data.table, Rcpp11
void R_ClearExternalPtr(SEXP s); // R_ClearExternalPtr used 157 times in 64 packages
void R_SetExternalPtrAddr(SEXP s, void *p); // R_SetExternalPtrAddr used 23 times in ff, PopGenome, RCurl, rstream, Rlabkey, WhopGenome, XML, RJSONIO, memisc, ROracle
void R_SetExternalPtrTag(SEXP s, SEXP tag); // R_SetExternalPtrTag used 16 times in PopGenome, rstream, Rlabkey, WhopGenome, Rcpp11, Rcpp, rLindo
void R_SetExternalPtrProtected(SEXP s, SEXP p); // R_SetExternalPtrProtected used 9 times in PopGenome, rstream, Rlabkey, Rcpp, WhopGenome, Rcpp11
typedef void (*R_CFinalizer_t)(SEXP);
void R_RegisterFinalizer(SEXP s, SEXP fun); // R_RegisterFinalizer used 1 times in XML
void R_RegisterCFinalizer(SEXP s, R_CFinalizer_t fun); // R_RegisterCFinalizer used 73 times in 27 packages
void R_RegisterFinalizerEx(SEXP s, SEXP fun, Rboolean onexit); // R_RegisterFinalizerEx unused
void R_RegisterCFinalizerEx(SEXP s, R_CFinalizer_t fun, Rboolean onexit); // R_RegisterCFinalizerEx used 152 times in 58 packages
void R_RunPendingFinalizers(void); // R_RunPendingFinalizers unused
SEXP R_MakeWeakRef(SEXP key, SEXP val, SEXP fin, Rboolean onexit); // R_MakeWeakRef used 4 times in igraph, svd
SEXP R_MakeWeakRefC(SEXP key, SEXP val, R_CFinalizer_t fin, Rboolean onexit); // R_MakeWeakRefC unused
SEXP R_WeakRefKey(SEXP w); // R_WeakRefKey used 3 times in igraph, Rcpp, Rcpp11
SEXP R_WeakRefValue(SEXP w); // R_WeakRefValue used 7 times in igraph, Rcpp, svd, Rcpp11
void R_RunWeakRefFinalizer(SEXP w); // R_RunWeakRefFinalizer used 1 times in igraph
SEXP R_PromiseExpr(SEXP); // R_PromiseExpr unused
SEXP R_ClosureExpr(SEXP); // R_ClosureExpr unused
void R_initialize_bcode(void); // R_initialize_bcode unused
SEXP R_bcEncode(SEXP); // R_bcEncode unused
SEXP R_bcDecode(SEXP); // R_bcDecode unused
Rboolean R_ToplevelExec(void (*fun)(void *), void *data);
SEXP R_ExecWithCleanup(SEXP (*fun)(void *), void *data,
void (*cleanfun)(void *), void *cleandata);
void R_RestoreHashCount(SEXP rho); // R_RestoreHashCount unused
Rboolean R_IsPackageEnv(SEXP rho); // R_IsPackageEnv unused
SEXP R_PackageEnvName(SEXP rho); // R_PackageEnvName unused
SEXP R_FindPackageEnv(SEXP info); // R_FindPackageEnv unused
Rboolean R_IsNamespaceEnv(SEXP rho); // R_IsNamespaceEnv unused
SEXP R_NamespaceEnvSpec(SEXP rho); // R_NamespaceEnvSpec unused
SEXP R_FindNamespace(SEXP info); // R_FindNamespace used 14 times in 11 packages
void R_LockEnvironment(SEXP env, Rboolean bindings); // R_LockEnvironment used 2 times in Rcpp, Rcpp11
Rboolean R_EnvironmentIsLocked(SEXP env); // R_EnvironmentIsLocked used 2 times in Rcpp, Rcpp11
void R_LockBinding(SEXP sym, SEXP env); // R_LockBinding used 3 times in data.table, Rcpp, Rcpp11
void R_unLockBinding(SEXP sym, SEXP env); // R_unLockBinding used 2 times in Rcpp, Rcpp11
void R_MakeActiveBinding(SEXP sym, SEXP fun, SEXP env); // R_MakeActiveBinding unused
Rboolean R_BindingIsLocked(SEXP sym, SEXP env); // R_BindingIsLocked used 2 times in Rcpp, Rcpp11
Rboolean R_BindingIsActive(SEXP sym, SEXP env); // R_BindingIsActive used 2 times in Rcpp, Rcpp11
Rboolean R_HasFancyBindings(SEXP rho); // R_HasFancyBindings unused
void Rf_errorcall(SEXP, const char *, ...) __attribute__((noreturn)); // Rf_errorcall used 27 times in purrr, mongolite, jsonlite, pbdMPI, rJava, openssl
// errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
void Rf_warningcall(SEXP, const char *, ...); // Rf_warningcall used 5 times in pbdMPI, mongolite
// warningcall used 4 times in RInside, jsonlite, pbdMPI
void Rf_warningcall_immediate(SEXP, const char *, ...); // Rf_warningcall_immediate used 2 times in mongolite, V8
// warningcall_immediate used 2 times in Runuran
void R_XDREncodeDouble(double d, void *buf); // R_XDREncodeDouble unused
double R_XDRDecodeDouble(void *buf); // R_XDRDecodeDouble unused
void R_XDREncodeInteger(int i, void *buf); // R_XDREncodeInteger unused
int R_XDRDecodeInteger(void *buf); // R_XDRDecodeInteger unused
typedef void *R_pstream_data_t;
typedef enum {
R_pstream_any_format,
R_pstream_ascii_format,
R_pstream_binary_format,
R_pstream_xdr_format,
R_pstream_asciihex_format
} R_pstream_format_t; // R_pstream_format_t used 7 times in RApiSerialize, Rhpc, fastdigest
typedef struct R_outpstream_st *R_outpstream_t;
struct R_outpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int version;
void (*OutChar)(R_outpstream_t, int);
void (*OutBytes)(R_outpstream_t, void *, int);
SEXP (*OutPersistHookFunc)(SEXP, SEXP);
SEXP OutPersistHookData; // OutPersistHookData unused
};
typedef struct R_inpstream_st *R_inpstream_t;
struct R_inpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int (*InChar)(R_inpstream_t);
void (*InBytes)(R_inpstream_t, void *, int);
SEXP (*InPersistHookFunc)(SEXP, SEXP);
SEXP InPersistHookData; // InPersistHookData unused
};
void R_InitInPStream(R_inpstream_t stream, R_pstream_data_t data, // R_InitInPStream used 2 times in RApiSerialize, Rhpc
R_pstream_format_t type,
int (*inchar)(R_inpstream_t),
void (*inbytes)(R_inpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitOutPStream(R_outpstream_t stream, R_pstream_data_t data, // R_InitOutPStream used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
R_pstream_format_t type, int version,
void (*outchar)(R_outpstream_t, int),
void (*outbytes)(R_outpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileInPStream(R_inpstream_t stream, FILE *fp, // R_InitFileInPStream used 1 times in filehash
R_pstream_format_t type,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileOutPStream(R_outpstream_t stream, FILE *fp, // R_InitFileOutPStream unused
R_pstream_format_t type, int version,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_Serialize(SEXP s, R_outpstream_t ops); // R_Serialize used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
SEXP R_Unserialize(R_inpstream_t ips); // R_Unserialize used 4 times in RApiSerialize, Rhpc, filehash
SEXP R_do_slot(SEXP obj, SEXP name); // R_do_slot used 162 times in 11 packages
SEXP R_do_slot_assign(SEXP obj, SEXP name, SEXP value); // R_do_slot_assign used 17 times in excel.link, redland, Rcpp11, Matrix, TMB, Rcpp, FREGAT, HiPLARM, OpenMx, rJPSGCS
int R_has_slot(SEXP obj, SEXP name); // R_has_slot used 14 times in Matrix, Rcpp, HiPLARM, OpenMx, Rcpp11
SEXP R_do_MAKE_CLASS(const char *what); // R_do_MAKE_CLASS used 6 times in TMB, Rcpp, Rcpp11
SEXP R_getClassDef (const char *what); // R_getClassDef used 5 times in memisc, Rcpp, Rcpp11
SEXP R_getClassDef_R(SEXP what); // R_getClassDef_R unused
Rboolean R_has_methods_attached(void); // R_has_methods_attached unused
Rboolean R_isVirtualClass(SEXP class_def, SEXP env); // R_isVirtualClass unused
Rboolean R_extends (SEXP class1, SEXP class2, SEXP env); // R_extends unused
SEXP R_do_new_object(SEXP class_def); // R_do_new_object used 9 times in TMB, memisc, Rcpp, Rcpp11
int R_check_class_and_super(SEXP x, const char **valid, SEXP rho); // R_check_class_and_super used 5 times in Matrix, Rmosek, HiPLARM
int R_check_class_etc (SEXP x, const char **valid); // R_check_class_etc used 41 times in Matrix, HiPLARM
void R_PreserveObject(SEXP); // R_PreserveObject used 112 times in 29 packages
void R_ReleaseObject(SEXP); // R_ReleaseObject used 114 times in 27 packages
void R_dot_Last(void); // R_dot_Last used 4 times in RInside, rJava, littler
void R_RunExitFinalizers(void); // R_RunExitFinalizers used 4 times in RInside, TMB, rJava, littler
int R_system(const char *); // R_system used 1 times in rJava
Rboolean R_compute_identical(SEXP, SEXP, int); // R_compute_identical used 14 times in igraph, Matrix, rgp, data.table
void R_orderVector(int *indx, int n, SEXP arglist, Rboolean nalast, Rboolean decreasing); // R_orderVector used 5 times in glpkAPI, nontarget, CEGO
SEXP Rf_allocVector(SEXPTYPE, R_xlen_t); // Rf_allocVector used 1086 times in 59 packages
// allocVector used 12419 times in 551 packages
Rboolean Rf_conformable(SEXP, SEXP); // Rf_conformable unused
// conformable used 141 times in 22 packages
SEXP Rf_elt(SEXP, int); // Rf_elt unused
// elt used 2310 times in 37 packages
Rboolean Rf_inherits(SEXP, const char *); // Rf_inherits used 530 times in 21 packages
// inherits used 814 times in 80 packages
Rboolean Rf_isArray(SEXP); // Rf_isArray unused
// isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
Rboolean Rf_isFactor(SEXP); // Rf_isFactor used 22 times in 11 packages
// isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
Rboolean Rf_isFrame(SEXP); // Rf_isFrame used 1 times in OpenMx
// isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
Rboolean Rf_isFunction(SEXP); // Rf_isFunction used 4 times in Rserve, genie, RcppClassic
// isFunction used 274 times in 43 packages
Rboolean Rf_isInteger(SEXP); // Rf_isInteger used 39 times in 14 packages
// isInteger used 402 times in 77 packages
Rboolean Rf_isLanguage(SEXP); // Rf_isLanguage unused
// isLanguage used 63 times in PythonInR, rgp, RandomFields
Rboolean Rf_isList(SEXP); // Rf_isList unused
// isList used 40 times in 11 packages
Rboolean Rf_isMatrix(SEXP); // Rf_isMatrix used 55 times in 16 packages
// isMatrix used 293 times in 65 packages
Rboolean Rf_isNewList(SEXP); // Rf_isNewList used 6 times in Rmosek, RcppClassic
// isNewList used 103 times in 27 packages
Rboolean Rf_isNumber(SEXP); // Rf_isNumber unused
// isNumber used 14 times in PythonInR, readr, stringi, qtbase
Rboolean Rf_isNumeric(SEXP); // Rf_isNumeric used 31 times in Rmosek, gaselect, RcppCNPy, genie, mets, Morpho, rstan, Rcpp, RcppClassic, OpenMx
// isNumeric used 468 times in 49 packages
Rboolean Rf_isPairList(SEXP); // Rf_isPairList unused
// isPairList used 2 times in PythonInR
Rboolean Rf_isPrimitive(SEXP); // Rf_isPrimitive unused
// isPrimitive used 7 times in PythonInR, qtbase
Rboolean Rf_isTs(SEXP); // Rf_isTs unused
// isTs used 2 times in PythonInR
Rboolean Rf_isUserBinop(SEXP); // Rf_isUserBinop unused
// isUserBinop used 2 times in PythonInR
Rboolean Rf_isValidString(SEXP); // Rf_isValidString unused
// isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
Rboolean Rf_isValidStringF(SEXP); // Rf_isValidStringF unused
// isValidStringF used 2 times in PythonInR
Rboolean Rf_isVector(SEXP); // Rf_isVector used 15 times in RProtoBuf, RcppCNPy, stringi, purrr, RcppClassic, OpenMx, adaptivetau
// isVector used 182 times in 46 packages
Rboolean Rf_isVectorAtomic(SEXP); // Rf_isVectorAtomic used 13 times in agop, tidyr, reshape2, stringi
// isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
Rboolean Rf_isVectorList(SEXP); // Rf_isVectorList used 23 times in genie, purrr, RNiftyReg, stringi
// isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
Rboolean Rf_isVectorizable(SEXP); // Rf_isVectorizable unused
// isVectorizable used 3 times in PythonInR, robfilter
SEXP Rf_lang1(SEXP); // Rf_lang1 used 14 times in PopGenome, WhopGenome, nontarget, Rcpp11, purrr, Rcpp, CEGO
// lang1 used 30 times in 11 packages
SEXP Rf_lang2(SEXP, SEXP); // Rf_lang2 used 64 times in 13 packages
// lang2 used 216 times in 75 packages
SEXP Rf_lang3(SEXP, SEXP, SEXP); // Rf_lang3 used 19 times in purrr, RcppDE, Rcpp, lbfgs, emdist, Rcpp11
// lang3 used 107 times in 28 packages
SEXP Rf_lang4(SEXP, SEXP, SEXP, SEXP); // Rf_lang4 used 8 times in lme4, purrr, Rcpp, diversitree, Rcpp11
// lang4 used 65 times in 21 packages
SEXP Rf_lang5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang5 unused
// lang5 used 11 times in PBSddesolve, GNE, SMC
SEXP Rf_lang6(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang6 used 1 times in lme4
// lang6 used 2 times in GNE
SEXP Rf_lastElt(SEXP); // Rf_lastElt unused
// lastElt unused
SEXP Rf_lcons(SEXP, SEXP); // Rf_lcons used 26 times in purrr, rcppbugs, Rcpp, pryr
// lcons used 16 times in rmgarch
R_len_t Rf_length(SEXP); // Rf_length used 662 times in 69 packages
SEXP Rf_list1(SEXP); // Rf_list1 used 1 times in Rcpp
// list1 used 197 times in 11 packages
SEXP Rf_list2(SEXP, SEXP); // Rf_list2 unused
// list2 used 441 times in 12 packages
SEXP Rf_list3(SEXP, SEXP, SEXP); // Rf_list3 unused
// list3 used 72 times in marked, Rdsdp, BH, svd
SEXP Rf_list4(SEXP, SEXP, SEXP, SEXP); // Rf_list4 unused
// list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
SEXP Rf_list5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_list5 unused
// list5 used 63 times in Rdsdp, BH
SEXP Rf_listAppend(SEXP, SEXP); // Rf_listAppend unused
// listAppend used 1 times in ore
SEXP Rf_mkNamed(SEXPTYPE, const char **); // Rf_mkNamed used 8 times in Matrix, gmp, RSclient, HiPLARM
// mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
SEXP Rf_mkString(const char *); // Rf_mkString used 179 times in 24 packages
// mkString used 814 times in 96 packages
int Rf_nlevels(SEXP); // Rf_nlevels unused
// nlevels used 546 times in 26 packages
int Rf_stringPositionTr(SEXP, const char *); // Rf_stringPositionTr unused
// stringPositionTr unused
SEXP Rf_ScalarComplex(Rcomplex); // Rf_ScalarComplex unused
// ScalarComplex unused
SEXP Rf_ScalarInteger(int); // Rf_ScalarInteger used 390 times in 20 packages
// ScalarInteger used 704 times in 88 packages
SEXP Rf_ScalarLogical(int); // Rf_ScalarLogical used 160 times in 20 packages
// ScalarLogical used 450 times in 64 packages
SEXP Rf_ScalarRaw(Rbyte); // Rf_ScalarRaw unused
// ScalarRaw used 4 times in qtbase, RGtk2
SEXP Rf_ScalarReal(double); // Rf_ScalarReal used 146 times in 19 packages
// ScalarReal used 330 times in 65 packages
SEXP Rf_ScalarString(SEXP); // Rf_ScalarString used 33 times in agop, Nippon, Rcpp11, rpf, stringi, purrr, Rcpp
// ScalarString used 198 times in 37 packages
R_xlen_t Rf_xlength(SEXP); // Rf_xlength used 46 times in WGCNA, Rcpp, Rcpp11
SEXP Rf_protect(SEXP); // Rf_protect used 332 times in 12 packages
// protect used 599 times in 101 packages
void Rf_unprotect(int); // Rf_unprotect used 289 times in 12 packages
// unprotect used 110 times in 35 packages
void R_ProtectWithIndex(SEXP, PROTECT_INDEX *); // R_ProtectWithIndex used 8 times in OpenMx
void R_Reprotect(SEXP, PROTECT_INDEX); // R_Reprotect used 2 times in OpenMx
SEXP R_FixupRHS(SEXP x, SEXP y); // R_FixupRHS unused
}
</pre>
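To illustrate how several of the heavily used entry points in the listing above (R_MakeExternalPtr, R_ExternalPtrAddr, R_RegisterCFinalizerEx, R_ClearExternalPtr) typically appear together in package code, here is a minimal sketch of the external-pointer-with-finalizer idiom. It assumes an R installation's headers (R.h, Rinternals.h) and a hypothetical .Call entry point named make_buffer; it is an illustration, not part of the generated listing.

```c
#include <stdlib.h>
#include <R.h>
#include <Rinternals.h>

/* Finalizer: runs when the external pointer is garbage collected. */
static void buf_finalizer(SEXP ptr)
{
    void *buf = R_ExternalPtrAddr(ptr);
    if (buf) {
        free(buf);
        R_ClearExternalPtr(ptr);  /* clear the address to avoid a dangling pointer */
    }
}

/* .Call entry point (hypothetical): wrap a malloc'd buffer in an external pointer. */
SEXP make_buffer(SEXP size)
{
    void *buf = malloc((size_t) Rf_asInteger(size));
    SEXP ptr = PROTECT(R_MakeExternalPtr(buf, R_NilValue, R_NilValue));
    /* onexit = TRUE: also run the finalizer at R shutdown, not only at GC */
    R_RegisterCFinalizerEx(ptr, buf_finalizer, TRUE);
    UNPROTECT(1);
    return ptr;
}
```

This pattern matches the usage counts above: R_MakeExternalPtr and R_ExternalPtrAddr are among the most widely used functions in the file, and packages that allocate native resources almost always pair them with a registered finalizer.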
== Stats ==
<pre>
0 1 2 3 4 5 6 7 8 9 10+
Macro: 57 9 7 3 4 1 2 3 0 1 98 (usage count)
(185) 57 20 8 2 6 4 4 5 3 4 72 (distinct package count)
Function: 103 14 17 11 14 11 7 6 4 5 186 (usage count)
(378) 103 30 30 20 28 7 9 6 1 5 139 (distinct package count)
Variable: 26 0 4 2 1 0 1 0 0 0 20 (usage count)
(54) 26 4 2 2 0 0 0 1 0 1 18 (distinct package count)
TypeDef: 1 0 0 0 0 0 0 1 0 0 5 (usage count)
(7) 1 0 0 1 0 0 0 0 0 0 5 (distinct package count)
Alias: 43 9 9 8 6 4 0 3 1 1 97 (usage count)
(181) 43 25 14 5 9 1 5 5 3 1 70 (distinct package count)
</pre>
(for a quick explanation of these stats see [[Native_API_stats_of_R.h]])
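A quick way to read the table: each row's eleven bucket counts (entities used 0, 1, ..., 9, or 10+ times) sum to the per-kind total shown in parentheses at the left. For example, in Python:

```python
# Bucket counts from the "usage count" rows above:
# how many entities of each kind fall into the 0, 1, ..., 9, 10+ usage buckets.
macro_usage = [57, 9, 7, 3, 4, 1, 2, 3, 0, 1, 98]
function_usage = [103, 14, 17, 11, 14, 11, 7, 6, 4, 5, 186]

# The buckets partition all analysed entities, so they sum to the totals
# in parentheses: 185 macros and 378 functions.
assert sum(macro_usage) == 185
assert sum(function_usage) == 378
```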
'''Native API stats of Rinternals.h without USE R INTERNALS'''

''(page created 2016-06-20 by Lukasstadler)''
== Input ==
<pre>
#include "Rinternals.h"
</pre>
== Result ==
<pre>
#define ANYSXP 18 // ANYSXP used 14 times in RPostgreSQL, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore, RGtk2
#define BCODESXP 21 // BCODESXP used 15 times in rcppbugs, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore
#define BCODE_CODE(x) CAR(x) // BCODE_CODE unused
#define BCODE_CONSTS(x) CDR(x) // BCODE_CONSTS unused
#define BCODE_EXPR(x) TAG(x) // BCODE_EXPR unused
#define BODY_EXPR(e) R_ClosureExpr(e) // BODY_EXPR unused
#define BUILTINSXP 8 // BUILTINSXP used 24 times in 11 packages
#define CHAR(x) R_CHAR(x) // CHAR used 4405 times in 362 packages
#define CHARSXP 9 // CHARSXP used 106 times in 33 packages
#define CLOSXP 3 // CLOSXP used 83 times in 30 packages
#define CONS(a, b) Rf_cons((a), (b)) // CONS used 458 times in 30 packages
#define CPLXSXP 15 // CPLXSXP used 409 times in 49 packages
#define CreateTag Rf_CreateTag // CreateTag used 1 times in rgp
#define DECREMENT_REFCNT(x) do {} while(0) // DECREMENT_REFCNT unused
#define DISABLE_REFCNT(x) do {} while(0) // DISABLE_REFCNT unused
#define DOTSXP 17 // DOTSXP used 16 times in RPostgreSQL, PythonInR, Rcpp11, seqminer, Rcpp, pryr, rtkpp, spikeSlabGAM, rtkore
#define DropDims Rf_DropDims // DropDims unused
#define ENABLE_NLS 1 // ENABLE_NLS used 80 times in 59 packages
#define ENABLE_REFCNT(x) do {} while(0) // ENABLE_REFCNT unused
#define ENVSXP 4 // ENVSXP used 63 times in 25 packages
#define EXPRSXP 20 // EXPRSXP used 84 times in 14 packages
#define EXTPTRSXP 22 // EXTPTRSXP used 386 times in 55 packages
#define EXTPTR_PROT(x) CDR(x) // EXTPTR_PROT used 5 times in rJava, pryr
#define EXTPTR_PTR(x) CAR(x) // EXTPTR_PTR used 428 times in 15 packages
#define EXTPTR_TAG(x) TAG(x) // EXTPTR_TAG used 9 times in excel.link, pryr, rJava, gsl
#define FREESXP 31 // FREESXP used 4 times in rtkpp, rtkore
#define FUNSXP 99 // FUNSXP used 6 times in dplyr, rtkpp, data.table, rtkore
#define GetArrayDimnames Rf_GetArrayDimnames // GetArrayDimnames unused
#define GetColNames Rf_GetColNames // GetColNames unused
#define GetMatrixDimnames Rf_GetMatrixDimnames // GetMatrixDimnames used 2 times in Kmisc, optmatch
#define GetOption Rf_GetOption // GetOption used 5 times in rgl, gmp, Cairo, RGtk2
#define GetOption1 Rf_GetOption1 // GetOption1 used 1 times in PCICt
#define GetOptionDigits Rf_GetOptionDigits // GetOptionDigits unused
#define GetOptionWidth Rf_GetOptionWidth // GetOptionWidth unused
#define GetRowNames Rf_GetRowNames // GetRowNames unused
#define HAVE_ALLOCA_H 1 // HAVE_ALLOCA_H used 15 times in treatSens, Matrix, TMB, pbdZMQ, ore, dbarts
#define HAVE_AQUA 1 // HAVE_AQUA used 13 times in 11 packages
#define HAVE_F77_UNDERSCORE 1 // HAVE_F77_UNDERSCORE used 2 times in igraph
#define IEEE_754 1 // IEEE_754 used 47 times in igraph, Rcpp, data.table, stringi
#define INCREMENT_NAMED(x) do { SEXP __x__ = (x); if (NAMED(__x__) != 2) SET_NAMED(__x__, NAMED(__x__) + 1); } while (0) // INCREMENT_NAMED unused
#define INCREMENT_REFCNT(x) do {} while(0) // INCREMENT_REFCNT unused
#define INLINE_PROTECT // INLINE_PROTECT unused
#define INTSXP 13 // INTSXP used 6341 times in 471 packages
#define ISNA(x) R_IsNA(x) // ISNA used 649 times in 100 packages
#define ISNAN(x) R_isnancpp(x) // ISNAN used 1342 times in 146 packages
#define IS_GETTER_CALL(call) (CADR(call) == R_TmpvalSymbol) // IS_GETTER_CALL unused
#define IS_SCALAR(x, type) (TYPEOF(x) == (type) && XLENGTH(x) == 1) // IS_SCALAR unused
#define IS_SIMPLE_SCALAR(x, type) ((TYPEOF(x) == (type) && XLENGTH(x) == 1) && ATTRIB(x) == R_NilValue) // IS_SIMPLE_SCALAR unused
#define IndexWidth Rf_IndexWidth // IndexWidth unused
#define LANGSXP 6 // LANGSXP used 1276 times in 53 packages
#define LCONS(a, b) Rf_lcons((a), (b)) // LCONS used 212 times in 24 packages
#define LGLSXP 10 // LGLSXP used 1430 times in 166 packages
#define LISTSXP 2 // LISTSXP used 87 times in 21 packages
#define LONG_VECTOR_SUPPORT // LONG_VECTOR_SUPPORT used 56 times in stringdist, matrixStats, RApiSerialize, Rhpc, pbdMPI, Rcpp11, Matrix
#define LibExport // LibExport used 2 times in hsmm
#define LibExtern extern // LibExtern used 4 times in rJava
#define LibImport // LibImport unused
#define MARK_NOT_MUTABLE(x) SET_NAMED(x, 2) // MARK_NOT_MUTABLE unused
#define MAX_NUM_SEXPTYPE (1<<5) // MAX_NUM_SEXPTYPE unused
#define MAYBE_REFERENCED(x) (! (NAMED(x) == 0)) // MAYBE_REFERENCED unused
#define MAYBE_SHARED(x) (NAMED(x) > 1) // MAYBE_SHARED unused
#define NAMEDMAX 2 // NAMEDMAX unused
#define NA_INTEGER R_NaInt // NA_INTEGER used 1520 times in 183 packages
#define NA_LOGICAL R_NaInt // NA_LOGICAL used 355 times in 73 packages
#define NA_REAL R_NaReal // NA_REAL used 1667 times in 226 packages
#define NA_STRING R_NaString // NA_STRING used 574 times in 90 packages
#define NEWSXP 30 // NEWSXP used 4 times in rtkpp, rtkore
#define NILSXP 0 // NILSXP used 169 times in 44 packages
#define NORET __attribute__((noreturn)) // NORET unused
#define NOT_SHARED(x) (! (NAMED(x) > 1)) // NOT_SHARED unused
#define NO_REFERENCES(x) (NAMED(x) == 0) // NO_REFERENCES unused
#define NonNullStringMatch Rf_NonNullStringMatch // NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
#define PREXPR(e) R_PromiseExpr(e) // PREXPR used 4 times in igraph, lazyeval
#define PROMSXP 5 // PROMSXP used 43 times in 14 packages
#define PROTECT(s) Rf_protect(s) // PROTECT used 24686 times in 767 packages
#define PROTECT_WITH_INDEX(x,i) R_ProtectWithIndex(x,i) // PROTECT_WITH_INDEX used 91 times in 27 packages
#define PairToVectorList Rf_PairToVectorList // PairToVectorList used 7 times in cba, rcdd
#define PrintValue Rf_PrintValue // PrintValue used 119 times in 13 packages
#define RAWSXP 24 // RAWSXP used 587 times in 92 packages
#define REALSXP 14 // REALSXP used 10171 times in 573 packages
#define REPROTECT(x,i) R_Reprotect(x,i) // REPROTECT used 130 times in 25 packages
#define R_ALLOCATOR_TYPE // R_ALLOCATOR_TYPE unused
#define R_ARITH_H_ // R_ARITH_H_ unused
#define R_COMPLEX_H // R_COMPLEX_H used 1 times in uniqueAtomMat
#define R_ERROR_H_ // R_ERROR_H_ unused
#define R_EXT_BOOLEAN_H_ // R_EXT_BOOLEAN_H_ used 2 times in jpeg, Rcpp11
#define R_EXT_MEMORY_H_ // R_EXT_MEMORY_H_ unused
#define R_EXT_PRINT_H_ // R_EXT_PRINT_H_ used 6 times in spTDyn, spTimer
#define R_EXT_UTILS_H_ // R_EXT_UTILS_H_ unused
#define R_FINITE(x) R_finite(x) // R_FINITE used 1387 times in 145 packages
#define R_INLINE inline // R_INLINE used 330 times in 34 packages
#define R_INTERNALS_H_ // R_INTERNALS_H_ used 7 times in uniqueAtomMat, rtkpp, rtkore, spatstat
#define R_LEN_T_MAX 2147483647 // R_LEN_T_MAX used 4 times in stringdist, matrixStats, FREGAT, Rcpp11
#define R_LONG_VEC_TOKEN -1 // R_LONG_VEC_TOKEN used 1 times in Rcpp11
#define R_RCONFIG_H // R_RCONFIG_H unused
#define R_SHORT_LEN_MAX 2147483647 // R_SHORT_LEN_MAX used 1 times in pbdMPI
#define R_XDR_DOUBLE_SIZE 8 // R_XDR_DOUBLE_SIZE used 2 times in rgdal
#define R_XDR_INTEGER_SIZE 4 // R_XDR_INTEGER_SIZE used 3 times in rgdal
#define R_XLEN_T_MAX 4503599627370496 // R_XLEN_T_MAX used 7 times in stringdist, Matrix, matrixStats, RApiSerialize, Rhpc
#define S3Class Rf_S3Class // S3Class used 4 times in RInside, littler
#define S4SXP 25 // S4SXP used 71 times in 15 packages
#define SET_REFCNT(x,v) do {} while(0) // SET_REFCNT unused
#define SET_TRACKREFS(x,v) do {} while(0) // SET_TRACKREFS unused
#define SIZEOF_SIZE_T 8 // SIZEOF_SIZE_T used 1 times in PythonInR
#define SPECIALSXP 7 // SPECIALSXP used 22 times in RPostgreSQL, PythonInR, Rcpp11, purrr, seqminer, Rcpp, yaml, pryr, rtkpp, rtkore
#define STRSXP 16 // STRSXP used 3247 times in 327 packages
#define SUPPORT_MBCS 1 // SUPPORT_MBCS used 1 times in bibtex
#define SUPPORT_UTF8 1 // SUPPORT_UTF8 used 3 times in tau, rindex, stringi
#define SYMSXP 1 // SYMSXP used 94 times in 25 packages
#define ScalarComplex Rf_ScalarComplex // ScalarComplex unused
#define ScalarInteger Rf_ScalarInteger // ScalarInteger used 704 times in 88 packages
#define ScalarLogical Rf_ScalarLogical // ScalarLogical used 450 times in 64 packages
#define ScalarRaw Rf_ScalarRaw // ScalarRaw used 4 times in qtbase, RGtk2
#define ScalarReal Rf_ScalarReal // ScalarReal used 330 times in 65 packages
#define ScalarString Rf_ScalarString // ScalarString used 198 times in 37 packages
#define StringBlank Rf_StringBlank // StringBlank unused
#define StringFalse Rf_StringFalse // StringFalse used 3 times in iotools
#define StringTrue Rf_StringTrue // StringTrue used 3 times in iotools
#define TYPE_BITS 5 // TYPE_BITS used 2 times in dplyr
#define UNPROTECT(n) Rf_unprotect(n) // UNPROTECT used 12247 times in 758 packages
#define UNPROTECT_PTR(s) Rf_unprotect_ptr(s) // UNPROTECT_PTR used 307 times in 14 packages
#define VECSXP 19 // VECSXP used 3142 times in 385 packages
#define VectorToPairList Rf_VectorToPairList // VectorToPairList used 13 times in pomp, arules
#define WEAKREFSXP 23 // WEAKREFSXP used 19 times in seqminer, Rcpp, pryr, rtkpp, rtkore, Rcpp11
#define acopy_string Rf_acopy_string // acopy_string used 10 times in splusTimeDate
#define addMissingVarsToNewEnv Rf_addMissingVarsToNewEnv // addMissingVarsToNewEnv unused
#define alloc3DArray Rf_alloc3DArray // alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
#define allocArray Rf_allocArray // allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
#define allocFormalsList2 Rf_allocFormalsList2 // allocFormalsList2 unused
#define allocFormalsList3 Rf_allocFormalsList3 // allocFormalsList3 unused
#define allocFormalsList4 Rf_allocFormalsList4 // allocFormalsList4 unused
#define allocFormalsList5 Rf_allocFormalsList5 // allocFormalsList5 unused
#define allocFormalsList6 Rf_allocFormalsList6 // allocFormalsList6 unused
#define allocList Rf_allocList // allocList used 60 times in 25 packages
#define allocMatrix Rf_allocMatrix // allocMatrix used 1577 times in 244 packages
#define allocS4Object Rf_allocS4Object // allocS4Object used 1 times in arules
#define allocSExp Rf_allocSExp // allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
#define allocVector Rf_allocVector // allocVector used 12419 times in 551 packages
#define allocVector3 Rf_allocVector3 // allocVector3 unused
#define any_duplicated Rf_any_duplicated // any_duplicated used 5 times in data.table, checkmate
#define any_duplicated3 Rf_any_duplicated3 // any_duplicated3 unused
#define applyClosure Rf_applyClosure // applyClosure unused
#define arraySubscript Rf_arraySubscript // arraySubscript used 13 times in proxy, arules, arulesSequences, cba, seriation
#define asChar Rf_asChar // asChar used 194 times in 36 packages
#define asCharacterFactor Rf_asCharacterFactor // asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
#define asComplex Rf_asComplex // asComplex used 1 times in ff
#define asInteger Rf_asInteger // asInteger used 1277 times in 140 packages
#define asLogical Rf_asLogical // asLogical used 462 times in 64 packages
#define asReal Rf_asReal // asReal used 383 times in 83 packages
#define asS4 Rf_asS4 // asS4 unused
#define cPsort Rf_cPsort // cPsort unused
#define classgets Rf_classgets // classgets used 91 times in 30 packages
#define coerceVector Rf_coerceVector // coerceVector used 2585 times in 167 packages
#define conformable Rf_conformable // conformable used 141 times in 22 packages
#define cons Rf_cons // cons used 609 times in 39 packages
#define copyListMatrix Rf_copyListMatrix // copyListMatrix used 1 times in Matrix
#define copyMatrix Rf_copyMatrix // copyMatrix used 7 times in BDgraph, Matrix, kza
#define copyMostAttrib Rf_copyMostAttrib // copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
#define copyVector Rf_copyVector // copyVector used 12 times in tm, kza, mlegp, adaptivetau
#define countContexts Rf_countContexts // countContexts unused
#define defineVar Rf_defineVar // defineVar used 218 times in 38 packages
#define dimgets Rf_dimgets // dimgets used 3 times in CorrBin
#define dimnamesgets Rf_dimnamesgets // dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
#define duplicate Rf_duplicate // duplicate used 2088 times in 224 packages
#define duplicated Rf_duplicated // duplicated used 402 times in 100 packages
#define elt Rf_elt // elt used 2310 times in 37 packages
#define error Rf_error // error used 63771 times in 1109 packages
#define error_return(msg) { Rf_error(msg); return R_NilValue; } // error_return used 100 times in rpg, RPostgreSQL, Rook, git2r, grr, rJava, rmumps
#define errorcall Rf_errorcall // errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
#define errorcall_return(cl,msg) { Rf_errorcall(cl, msg); return R_NilValue; } // errorcall_return used 31 times in Runuran
#define eval Rf_eval // eval used 25178 times in 269 packages
#define findFun Rf_findFun // findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
#define findVar Rf_findVar // findVar used 1333 times in 24 packages
#define findVarInFrame Rf_findVarInFrame // findVarInFrame used 101 times in 13 packages
#define findVarInFrame3 Rf_findVarInFrame3 // findVarInFrame3 used 5 times in datamap
#define getAttrib Rf_getAttrib // getAttrib used 1930 times in 239 packages
#define getCharCE Rf_getCharCE // getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
#define gsetVar Rf_gsetVar // gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
#define iPsort Rf_iPsort // iPsort used 3 times in matrixStats, robustbase
#define inherits Rf_inherits // inherits used 814 times in 80 packages
#define install Rf_install // install used 3178 times in 224 packages
#define installChar Rf_installChar // installChar used 4 times in dplyr
#define installDDVAL Rf_installDDVAL // installDDVAL unused
#define installS3Signature Rf_installS3Signature // installS3Signature unused
#define isArray Rf_isArray // isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
#define isBasicClass Rf_isBasicClass // isBasicClass unused
#define isBlankString Rf_isBlankString // isBlankString used 1 times in iotools
#define isByteCode(x) (TYPEOF(x)==21) // isByteCode unused
#define isComplex(s) Rf_isComplex(s) // isComplex used 119 times in checkmate, PythonInR, ifultools, Rblpapi, Rcpp11, rmatio, stringi, Matrix, qtbase
#define isEnvironment(s) Rf_isEnvironment(s) // isEnvironment used 113 times in 52 packages
#define isExpression(s) Rf_isExpression(s) // isExpression used 3 times in PythonInR, Rcpp11
#define isFactor Rf_isFactor // isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
#define isFrame Rf_isFrame // isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
#define isFree Rf_isFree // isFree unused
#define isFunction Rf_isFunction // isFunction used 274 times in 43 packages
#define isInteger Rf_isInteger // isInteger used 402 times in 77 packages
#define isLanguage Rf_isLanguage // isLanguage used 63 times in PythonInR, rgp, RandomFields
#define isList Rf_isList // isList used 40 times in 11 packages
#define isLogical(s) Rf_isLogical(s) // isLogical used 215 times in 53 packages
#define isMatrix Rf_isMatrix // isMatrix used 293 times in 65 packages
#define isNewList Rf_isNewList // isNewList used 103 times in 27 packages
#define isNull(s) Rf_isNull(s) // isNull used 1915 times in 119 packages
#define isNumber Rf_isNumber // isNumber used 14 times in PythonInR, readr, stringi, qtbase
#define isNumeric Rf_isNumeric // isNumeric used 468 times in 49 packages
#define isObject(s) Rf_isObject(s) // isObject used 11 times in dplyr, Rcpp, PythonInR, Rcpp11, stringi, rmumps
#define isOrdered Rf_isOrdered // isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
#define isPairList Rf_isPairList // isPairList used 2 times in PythonInR
#define isPrimitive Rf_isPrimitive // isPrimitive used 7 times in PythonInR, qtbase
#define isReal(s) Rf_isReal(s) // isReal used 323 times in 64 packages
#define isS4 Rf_isS4 // isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
#define isString(s) Rf_isString(s) // isString used 280 times in 59 packages
#define isSymbol(s) Rf_isSymbol(s) // isSymbol used 68 times in PythonInR, data.table, Rcpp11, stringi, rgp, dbarts, rJava, sourcetools
#define isTs Rf_isTs // isTs used 2 times in PythonInR
#define isUnordered Rf_isUnordered // isUnordered used 2 times in PythonInR
#define isUnsorted Rf_isUnsorted // isUnsorted unused
#define isUserBinop Rf_isUserBinop // isUserBinop used 2 times in PythonInR
#define isValidString Rf_isValidString // isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
#define isValidStringF Rf_isValidStringF // isValidStringF used 2 times in PythonInR
#define isVector Rf_isVector // isVector used 182 times in 46 packages
#define isVectorAtomic Rf_isVectorAtomic // isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
#define isVectorList Rf_isVectorList // isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
#define isVectorizable Rf_isVectorizable // isVectorizable used 3 times in PythonInR, robfilter
#define lang1 Rf_lang1 // lang1 used 30 times in 11 packages
#define lang2 Rf_lang2 // lang2 used 216 times in 75 packages
#define lang3 Rf_lang3 // lang3 used 107 times in 28 packages
#define lang4 Rf_lang4 // lang4 used 65 times in 21 packages
#define lang5 Rf_lang5 // lang5 used 11 times in PBSddesolve, GNE, SMC
#define lang6 Rf_lang6 // lang6 used 2 times in GNE
#define lastElt Rf_lastElt // lastElt unused
#define lazy_duplicate Rf_lazy_duplicate // lazy_duplicate unused
#define lcons Rf_lcons // lcons used 16 times in rmgarch
#define length(x) Rf_length(x) // length used 44060 times in 1224 packages
#define lengthgets Rf_lengthgets // lengthgets used 47 times in 11 packages
#define list1 Rf_list1 // list1 used 197 times in 11 packages
#define list2 Rf_list2 // list2 used 441 times in 12 packages
#define list3 Rf_list3 // list3 used 72 times in marked, Rdsdp, BH, svd
#define list4 Rf_list4 // list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
#define list5 Rf_list5 // list5 used 63 times in Rdsdp, BH
#define listAppend Rf_listAppend // listAppend used 1 time in ore
#define match Rf_match // match used 8773 times in 388 packages
#define matchE Rf_matchE // matchE unused
#define mkChar Rf_mkChar // mkChar used 4545 times in 287 packages
#define mkCharCE Rf_mkCharCE // mkCharCE used 72 times in 15 packages
#define mkCharLen Rf_mkCharLen // mkCharLen used 38 times in 16 packages
#define mkCharLenCE Rf_mkCharLenCE // mkCharLenCE used 23 times in 11 packages
#define mkNamed Rf_mkNamed // mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
#define mkString Rf_mkString // mkString used 814 times in 96 packages
#define namesgets Rf_namesgets // namesgets used 80 times in 14 packages
#define ncols Rf_ncols // ncols used 3805 times in 182 packages
#define nlevels Rf_nlevels // nlevels used 546 times in 26 packages
#define nrows Rf_nrows // nrows used 4332 times in 215 packages
#define nthcdr Rf_nthcdr // nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
#define pmatch Rf_pmatch // pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
#define protect Rf_protect // protect used 599 times in 101 packages
#define psmatch Rf_psmatch // psmatch used 5 times in rgl
#define rPsort Rf_rPsort // rPsort used 63 times in 15 packages
#define reEnc Rf_reEnc // reEnc used 3 times in PythonInR, RJSONIO
#define readS3VarsFromFrame Rf_readS3VarsFromFrame // readS3VarsFromFrame unused
#define revsort Rf_revsort // revsort used 60 times in 20 packages
#define rownamesgets Rf_rownamesgets // rownamesgets unused
#define setAttrib Rf_setAttrib // setAttrib used 1830 times in 251 packages
#define setIVector Rf_setIVector // setIVector unused
#define setRVector Rf_setRVector // setRVector used 3 times in RcppClassic, RcppClassicExamples
#define setSVector Rf_setSVector // setSVector unused
#define setVar Rf_setVar // setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
#define shallow_duplicate Rf_shallow_duplicate // shallow_duplicate used 2 times in tmlenet, smint
#define str2type Rf_str2type // str2type used 1 time in RGtk2
#define stringPositionTr Rf_stringPositionTr // stringPositionTr unused
#define stringSuffix Rf_stringSuffix // stringSuffix unused
#define substitute Rf_substitute // substitute used 255 times in 56 packages
#define topenv Rf_topenv // topenv unused
#define translateChar Rf_translateChar // translateChar used 59 times in 19 packages
#define translateChar0 Rf_translateChar0 // translateChar0 unused
#define translateCharUTF8 Rf_translateCharUTF8 // translateCharUTF8 used 66 times in 13 packages
#define type2char Rf_type2char // type2char used 107 times in 12 packages
#define type2rstr Rf_type2rstr // type2rstr unused
#define type2str Rf_type2str // type2str used 3 times in Kmisc, yaml
#define type2str_nowarn Rf_type2str_nowarn // type2str_nowarn used 1 time in qrmtools
#define unprotect Rf_unprotect // unprotect used 110 times in 35 packages
#define unprotect_ptr Rf_unprotect_ptr // unprotect_ptr unused
#define warning Rf_warning // warning used 7679 times in 434 packages
#define warningcall Rf_warningcall // warningcall used 4 times in RInside, jsonlite, pbdMPI
#define warningcall_immediate Rf_warningcall_immediate // warningcall_immediate used 2 times in Runuran
#define xlength(x) Rf_xlength(x) // xlength used 186 times in stringdist, yuima, matrixStats, Rhpc, validate, checkmate, dplR, Rdsdp, pscl, DescTools
#define xlengthgets Rf_xlengthgets // xlengthgets unused
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Arith.h
extern "C" {
extern double R_NaN; // R_NaN used 469 times in 68 packages
extern double R_PosInf; // R_PosInf used 562 times in 112 packages
extern double R_NegInf; // R_NegInf used 699 times in 105 packages
extern double R_NaReal; // R_NaReal used 140 times in 34 packages
// NA_REAL used 1667 times in 226 packages
extern int R_NaInt; // R_NaInt used 58 times in 20 packages
// NA_INTEGER used 1520 times in 183 packages
// NA_LOGICAL used 355 times in 73 packages
int R_IsNA(double); // R_IsNA used 161 times in 40 packages
int R_IsNaN(double); // R_IsNaN used 75 times in 28 packages
int R_finite(double); // R_finite used 232 times in 44 packages
int R_isnancpp(double); // R_isnancpp used 8 times in igraph, PwrGSD
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Boolean.h
extern "C" {
typedef enum { FALSE = 0, TRUE } Rboolean;
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Complex.h
extern "C" {
typedef struct {
double r;
double i;
} Rcomplex; // Rcomplex used 893 times in 47 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Error.h
extern "C" {
void __attribute__((noreturn)) Rf_error(const char *, ...);
void __attribute__((noreturn)) UNIMPLEMENTED(const char *);
void __attribute__((noreturn)) WrongArgCount(const char *);
void Rf_warning(const char *, ...); // Rf_warning used 316 times in 66 packages
// warning used 7679 times in 434 packages
void R_ShowMessage(const char *s); // R_ShowMessage used 104 times in Rserve, rJava, HiPLARM
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Memory.h
extern "C" {
void* vmaxget(void); // vmaxget used 279 times in 20 packages
void vmaxset(const void *); // vmaxset used 279 times in 20 packages
void R_gc(void); // R_gc used 6 times in TMB, excel.link, gmatrix, microbenchmark
int R_gc_running(); // R_gc_running unused
char* R_alloc(size_t, int); // R_alloc used 7787 times in 330 packages
long double *R_allocLD(size_t nelem);
char* S_alloc(long, int); // S_alloc used 540 times in 50 packages
char* S_realloc(char *, long, long, int); // S_realloc used 55 times in 11 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Print.h
extern "C" {
void Rprintf(const char *, ...); // Rprintf used 33813 times in 729 packages
void REprintf(const char *, ...); // REprintf used 2531 times in 135 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Utils.h
extern "C" {
void R_isort(int*, int); // R_isort used 45 times in 18 packages
void R_rsort(double*, int); // R_rsort used 210 times in 29 packages
void R_csort(Rcomplex*, int); // R_csort unused
void rsort_with_index(double *, int *, int); // rsort_with_index used 77 times in 40 packages
void Rf_revsort(double*, int*, int); // Rf_revsort unused
// revsort used 60 times in 20 packages
void Rf_iPsort(int*, int, int); // Rf_iPsort unused
// iPsort used 3 times in matrixStats, robustbase
void Rf_rPsort(double*, int, int); // Rf_rPsort unused
// rPsort used 63 times in 15 packages
void Rf_cPsort(Rcomplex*, int, int); // Rf_cPsort unused
// cPsort unused
void R_qsort (double *v, size_t i, size_t j); // R_qsort used 10 times in extWeibQuant, pomp, robustbase, dplR, tclust, pcaPP
void R_qsort_I (double *v, int *II, int i, int j); // R_qsort_I used 33 times in 15 packages
void R_qsort_int (int *iv, size_t i, size_t j); // R_qsort_int unused
void R_qsort_int_I(int *iv, int *II, int i, int j); // R_qsort_int_I used 19 times in ff, matrixStats, arules, Rborist, slam, eco, bnlearn
const char *R_ExpandFileName(const char *); // R_ExpandFileName used 42 times in 20 packages
void Rf_setIVector(int*, int, int); // Rf_setIVector unused
// setIVector unused
void Rf_setRVector(double*, int, double); // Rf_setRVector unused
// setRVector used 3 times in RcppClassic, RcppClassicExamples
Rboolean Rf_StringFalse(const char *); // Rf_StringFalse unused
// StringFalse used 3 times in iotools
Rboolean Rf_StringTrue(const char *); // Rf_StringTrue unused
// StringTrue used 3 times in iotools
Rboolean Rf_isBlankString(const char *); // Rf_isBlankString unused
// isBlankString used 1 time in iotools
double R_atof(const char *str); // R_atof used 9 times in SSN, tree, foreign, iotools
double R_strtod(const char *c, char **end); // R_strtod used 4 times in ape, iotools
char *R_tmpnam(const char *prefix, const char *tempdir); // R_tmpnam used 2 times in geometry
char *R_tmpnam2(const char *prefix, const char *tempdir, const char *fileext); // R_tmpnam2 unused
void R_CheckUserInterrupt(void); // R_CheckUserInterrupt used 1487 times in 234 packages
void R_CheckStack(void); // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
void R_CheckStack2(size_t); // R_CheckStack2 unused
int findInterval(double *xt, int n, double x, // findInterval used 11 times in BSquare, DNAprofiles, unfoldr, chebpol, pomp, eco, protViz, PBSmapping, spatstat
Rboolean rightmost_closed, Rboolean all_inside, int ilo,
int *mflag);
void find_interv_vec(double *xt, int *n, double *x, int *nx, // find_interv_vec unused
int *rightmost_closed, int *all_inside, int *indx);
void R_max_col(double *matrix, int *nr, int *nc, int *maxes, int *ties_meth); // R_max_col used 2 times in geostatsp, MNP
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rinternals.h
extern "C" {
typedef unsigned char Rbyte;
typedef int R_len_t; // R_len_t used 2397 times in 70 packages
typedef ptrdiff_t R_xlen_t; // R_xlen_t used 1537 times in 32 packages
typedef struct { R_xlen_t lv_length, lv_truelength; } R_long_vec_hdr_t;
typedef unsigned int SEXPTYPE;
typedef struct SEXPREC *SEXP;
const char *(R_CHAR)(SEXP x); // R_CHAR used 7 times in OpenMx, rjags, rpf
Rboolean (Rf_isNull)(SEXP s); // Rf_isNull used 275 times in 43 packages
Rboolean (Rf_isSymbol)(SEXP s); // Rf_isSymbol used 4 times in agop, PopGenome, WhopGenome, Rcpp11
Rboolean (Rf_isLogical)(SEXP s); // Rf_isLogical used 16 times in agop, Rmosek, Rcpp11, mets, seqminer, Rcpp, RcppClassic
Rboolean (Rf_isReal)(SEXP s); // Rf_isReal used 26 times in agop, Rmosek, Rcpp11, rpf, genie, lme4, seqminer, RcppClassic, OpenMx
Rboolean (Rf_isComplex)(SEXP s); // Rf_isComplex used 3 times in mets, Rcpp11
Rboolean (Rf_isExpression)(SEXP s); // Rf_isExpression used 1 time in Rcpp11
Rboolean (Rf_isEnvironment)(SEXP s); // Rf_isEnvironment used 5 times in Rcpp, Rcpp11
Rboolean (Rf_isString)(SEXP s); // Rf_isString used 39 times in 13 packages
Rboolean (Rf_isObject)(SEXP s); // Rf_isObject used 4 times in genie, Rcpp, Rcpp11
SEXP (ATTRIB)(SEXP x); // ATTRIB used 83 times in 20 packages
int (OBJECT)(SEXP x); // OBJECT used 102 times in 28 packages
int (MARK)(SEXP x); // MARK used 251 times in 21 packages
int (TYPEOF)(SEXP x); // TYPEOF used 2832 times in 195 packages
int (NAMED)(SEXP x); // NAMED used 62 times in 22 packages
int (REFCNT)(SEXP x); // REFCNT unused
void (SET_OBJECT)(SEXP x, int v); // SET_OBJECT used 32 times in RSclient, reshape2, Rserve, data.table, actuar, dplyr, proxy, rmongodb, slam, tau
void (SET_TYPEOF)(SEXP x, int v); // SET_TYPEOF used 38 times in 21 packages
void (SET_NAMED)(SEXP x, int v); // SET_NAMED used 10 times in dplyr, yaml, data.table, iotools, RSQLite
void SET_ATTRIB(SEXP x, SEXP v); // SET_ATTRIB used 54 times in 18 packages
void DUPLICATE_ATTRIB(SEXP to, SEXP from); // DUPLICATE_ATTRIB used 5 times in covr, lfe, testthat, data.table
int (IS_S4_OBJECT)(SEXP x); // IS_S4_OBJECT used 23 times in Rmosek, Runuran, data.table, xts, Matrix, slam, zoo, HiPLARM, OpenMx, tau
void (SET_S4_OBJECT)(SEXP x); // SET_S4_OBJECT used 12 times in RSclient, redland, Rserve, data.table, FREGAT, rJPSGCS, tau
void (UNSET_S4_OBJECT)(SEXP x); // UNSET_S4_OBJECT used 2 times in data.table, slam
int (LENGTH)(SEXP x); // LENGTH used 5845 times in 356 packages
int (TRUELENGTH)(SEXP x); // TRUELENGTH used 37 times in data.table
void (SETLENGTH)(SEXP x, int v); // SETLENGTH used 65 times in 11 packages
void (SET_TRUELENGTH)(SEXP x, int v); // SET_TRUELENGTH used 26 times in data.table
R_xlen_t (XLENGTH)(SEXP x); // XLENGTH used 287 times in 21 packages
R_xlen_t (XTRUELENGTH)(SEXP x); // XTRUELENGTH unused
int (IS_LONG_VEC)(SEXP x); // IS_LONG_VEC used 1 time in RProtoBuf
int (LEVELS)(SEXP x); // LEVELS used 18 times in rtdists, rPref, BsMD, data.table, stringi, dplyr, OBsMD, pbdZMQ, astrochron, RandomFields
int (SETLEVELS)(SEXP x, int v); // SETLEVELS used 2 times in Rcpp11
int *(LOGICAL)(SEXP x); // LOGICAL used 4473 times in 288 packages
int *(INTEGER)(SEXP x); // INTEGER used 41659 times in 758 packages
Rbyte *(RAW)(SEXP x); // RAW used 880 times in 99 packages
double *(REAL)(SEXP x); // REAL used 30947 times in 687 packages
Rcomplex *(COMPLEX)(SEXP x); // COMPLEX used 1697 times in 71 packages
SEXP (STRING_ELT)(SEXP x, R_xlen_t i); // STRING_ELT used 4143 times in 333 packages
SEXP (VECTOR_ELT)(SEXP x, R_xlen_t i); // VECTOR_ELT used 8626 times in 291 packages
void SET_STRING_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_STRING_ELT used 5834 times in 321 packages
SEXP SET_VECTOR_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_VECTOR_ELT used 9751 times in 391 packages
SEXP *(STRING_PTR)(SEXP x); // STRING_PTR used 65 times in 14 packages
SEXP * __attribute__((noreturn)) (VECTOR_PTR)(SEXP x);
SEXP (TAG)(SEXP e); // TAG used 513 times in 40 packages
SEXP (CAR)(SEXP e); // CAR used 575 times in 63 packages
SEXP (CDR)(SEXP e); // CDR used 4523 times in 76 packages
SEXP (CAAR)(SEXP e); // CAAR unused
SEXP (CDAR)(SEXP e); // CDAR unused
SEXP (CADR)(SEXP e); // CADR used 104 times in 17 packages
SEXP (CDDR)(SEXP e); // CDDR used 52 times in Rlabkey, Rcpp11, dplyr, proxy, Rcpp, slam, tikzDevice, OpenCL, svd
SEXP (CDDDR)(SEXP e); // CDDDR unused
SEXP (CADDR)(SEXP e); // CADDR used 52 times in 11 packages
SEXP (CADDDR)(SEXP e); // CADDDR used 21 times in RPostgreSQL, foreign, actuar, bibtex
SEXP (CAD4R)(SEXP e); // CAD4R used 14 times in earth, foreign, actuar
int (MISSING)(SEXP x); // MISSING used 125 times in 25 packages
void (SET_MISSING)(SEXP x, int v); // SET_MISSING used 1 time in sprint
void SET_TAG(SEXP x, SEXP y); // SET_TAG used 200 times in 34 packages
SEXP SETCAR(SEXP x, SEXP y); // SETCAR used 4072 times in 47 packages
SEXP SETCDR(SEXP x, SEXP y); // SETCDR used 46 times in 14 packages
SEXP SETCADR(SEXP x, SEXP y); // SETCADR used 112 times in 37 packages
SEXP SETCADDR(SEXP x, SEXP y); // SETCADDR used 45 times in 14 packages
SEXP SETCADDDR(SEXP x, SEXP y); // SETCADDDR used 31 times in 12 packages
SEXP SETCAD4R(SEXP e, SEXP y); // SETCAD4R used 15 times in kergp, Sim.DiffProc, tikzDevice
SEXP CONS_NR(SEXP a, SEXP b); // CONS_NR unused
SEXP (FORMALS)(SEXP x); // FORMALS used 15 times in qtpaint, RSclient, PBSddesolve, Rserve, covr, pryr, rgp, testthat, RandomFields
SEXP (BODY)(SEXP x); // BODY used 48 times in 15 packages
SEXP (CLOENV)(SEXP x); // CLOENV used 23 times in Rcpp11, covr, pomp, Rcpp, pryr, testthat, qtbase
int (RDEBUG)(SEXP x); // RDEBUG used 69 times in rmetasim
int (RSTEP)(SEXP x); // RSTEP unused
int (RTRACE)(SEXP x); // RTRACE unused
void (SET_RDEBUG)(SEXP x, int v); // SET_RDEBUG unused
void (SET_RSTEP)(SEXP x, int v); // SET_RSTEP unused
void (SET_RTRACE)(SEXP x, int v); // SET_RTRACE unused
void SET_FORMALS(SEXP x, SEXP v); // SET_FORMALS used 5 times in covr, rgp, testthat, qtbase
void SET_BODY(SEXP x, SEXP v); // SET_BODY used 6 times in covr, rgp, testthat, qtbase
void SET_CLOENV(SEXP x, SEXP v); // SET_CLOENV used 6 times in covr, rgp, testthat, qtbase
SEXP (PRINTNAME)(SEXP x); // PRINTNAME used 92 times in 29 packages
SEXP (SYMVALUE)(SEXP x); // SYMVALUE unused
SEXP (INTERNAL)(SEXP x); // INTERNAL used 1014 times in 63 packages
int (DDVAL)(SEXP x); // DDVAL unused
void (SET_DDVAL)(SEXP x, int v); // SET_DDVAL unused
void SET_PRINTNAME(SEXP x, SEXP v); // SET_PRINTNAME unused
void SET_SYMVALUE(SEXP x, SEXP v); // SET_SYMVALUE unused
void SET_INTERNAL(SEXP x, SEXP v); // SET_INTERNAL unused
SEXP (FRAME)(SEXP x); // FRAME used 19 times in deTestSet, IRISSeismic, pryr, BayesBridge, datamap, BayesLogit
SEXP (ENCLOS)(SEXP x); // ENCLOS used 7 times in Rcpp, pryr, rJava, Rcpp11, RGtk2
SEXP (HASHTAB)(SEXP x); // HASHTAB used 12 times in Rcpp, pryr, datamap, Rcpp11, qtbase
int (ENVFLAGS)(SEXP x); // ENVFLAGS unused
void (SET_ENVFLAGS)(SEXP x, int v); // SET_ENVFLAGS unused
void SET_FRAME(SEXP x, SEXP v); // SET_FRAME used 4 times in rgp, mmap, qtbase
void SET_ENCLOS(SEXP x, SEXP v); // SET_ENCLOS used 7 times in rgp, RandomFields, mmap, qtbase
void SET_HASHTAB(SEXP x, SEXP v); // SET_HASHTAB used 5 times in rgp, mmap, qtbase
SEXP (PRCODE)(SEXP x); // PRCODE used 15 times in dplyr, Rcpp, pryr, Rcpp11
SEXP (PRENV)(SEXP x); // PRENV used 14 times in igraph, dplyr, Rcpp, pryr, Rcpp11, lazyeval
SEXP (PRVALUE)(SEXP x); // PRVALUE used 12 times in dplyr, Rcpp, pryr, Rcpp11
int (PRSEEN)(SEXP x); // PRSEEN used 4 times in Rcpp, Rcpp11
void (SET_PRSEEN)(SEXP x, int v); // SET_PRSEEN unused
void SET_PRENV(SEXP x, SEXP v); // SET_PRENV unused
void SET_PRVALUE(SEXP x, SEXP v); // SET_PRVALUE unused
void SET_PRCODE(SEXP x, SEXP v); // SET_PRCODE unused
void SET_PRSEEN(SEXP x, int v); // SET_PRSEEN unused
int (HASHASH)(SEXP x); // HASHASH unused
int (HASHVALUE)(SEXP x); // HASHVALUE unused
void (SET_HASHASH)(SEXP x, int v); // SET_HASHASH unused
void (SET_HASHVALUE)(SEXP x, int v); // SET_HASHVALUE unused
typedef int PROTECT_INDEX; // PROTECT_INDEX used 94 times in 27 packages
extern SEXP R_GlobalEnv; // R_GlobalEnv used 1400 times in 79 packages
extern SEXP R_EmptyEnv; // R_EmptyEnv used 16 times in Rserve, dplR, Rcpp11, Rcpp, RcppClassic, pryr, rJava, adaptivetau, qtbase
extern SEXP R_BaseEnv; // R_BaseEnv used 27 times in 15 packages
extern SEXP R_BaseNamespace; // R_BaseNamespace used 3 times in Rcpp, Rcpp11
extern SEXP R_NamespaceRegistry; // R_NamespaceRegistry used 3 times in devtools, namespace, Rcpp
extern SEXP R_Srcref; // R_Srcref unused
extern SEXP R_NilValue; // R_NilValue used 10178 times in 491 packages
extern SEXP R_UnboundValue; // R_UnboundValue used 73 times in 23 packages
extern SEXP R_MissingArg; // R_MissingArg used 21 times in 12 packages
extern SEXP R_RestartToken; // R_RestartToken unused
extern SEXP R_baseSymbol; // R_baseSymbol unused
extern SEXP R_BaseSymbol; // R_BaseSymbol unused
extern SEXP R_BraceSymbol; // R_BraceSymbol unused
extern SEXP R_Bracket2Symbol; // R_Bracket2Symbol used 4 times in purrr
extern SEXP R_BracketSymbol; // R_BracketSymbol unused
extern SEXP R_ClassSymbol; // R_ClassSymbol used 311 times in 84 packages
extern SEXP R_DeviceSymbol; // R_DeviceSymbol unused
extern SEXP R_DimNamesSymbol; // R_DimNamesSymbol used 230 times in 51 packages
extern SEXP R_DimSymbol; // R_DimSymbol used 1015 times in 170 packages
extern SEXP R_DollarSymbol; // R_DollarSymbol used 6 times in dplyr, Rcpp, Rcpp11
extern SEXP R_DotsSymbol; // R_DotsSymbol used 13 times in RPostgreSQL, RcppDE, lbfgs, purrr, RMySQL, DEoptim, qtbase
extern SEXP R_DoubleColonSymbol; // R_DoubleColonSymbol unused
extern SEXP R_DropSymbol; // R_DropSymbol unused
extern SEXP R_LastvalueSymbol; // R_LastvalueSymbol unused
extern SEXP R_LevelsSymbol; // R_LevelsSymbol used 51 times in 17 packages
extern SEXP R_ModeSymbol; // R_ModeSymbol unused
extern SEXP R_NaRmSymbol; // R_NaRmSymbol used 2 times in dplyr
extern SEXP R_NameSymbol; // R_NameSymbol used 2 times in qtbase
extern SEXP R_NamesSymbol; // R_NamesSymbol used 1373 times in 249 packages
extern SEXP R_NamespaceEnvSymbol; // R_NamespaceEnvSymbol unused
extern SEXP R_PackageSymbol; // R_PackageSymbol used 2 times in Rmosek, HiPLARM
extern SEXP R_PreviousSymbol; // R_PreviousSymbol unused
extern SEXP R_QuoteSymbol; // R_QuoteSymbol unused
extern SEXP R_RowNamesSymbol; // R_RowNamesSymbol used 97 times in 37 packages
extern SEXP R_SeedsSymbol; // R_SeedsSymbol used 2 times in treatSens
extern SEXP R_SortListSymbol; // R_SortListSymbol unused
extern SEXP R_SourceSymbol; // R_SourceSymbol unused
extern SEXP R_SpecSymbol; // R_SpecSymbol unused
extern SEXP R_TripleColonSymbol; // R_TripleColonSymbol unused
extern SEXP R_TspSymbol; // R_TspSymbol unused
extern SEXP R_dot_defined; // R_dot_defined unused
extern SEXP R_dot_Method; // R_dot_Method unused
extern SEXP R_dot_packageName; // R_dot_packageName unused
extern SEXP R_dot_target; // R_dot_target unused
extern SEXP R_NaString; // R_NaString used 36 times in stringdist, RCurl, RSclient, uniqueAtomMat, XML, Rserve, Rblpapi, SoundexBR, rJava, iotools
// NA_STRING used 574 times in 90 packages
extern SEXP R_BlankString; // R_BlankString used 39 times in 13 packages
extern SEXP R_BlankScalarString; // R_BlankScalarString unused
SEXP R_GetCurrentSrcref(int); // R_GetCurrentSrcref unused
SEXP R_GetSrcFilename(SEXP); // R_GetSrcFilename unused
SEXP Rf_asChar(SEXP); // Rf_asChar used 246 times in 16 packages
// asChar used 194 times in 36 packages
SEXP Rf_coerceVector(SEXP, SEXPTYPE); // Rf_coerceVector used 44 times in 13 packages
// coerceVector used 2585 times in 167 packages
SEXP Rf_PairToVectorList(SEXP x); // Rf_PairToVectorList unused
// PairToVectorList used 7 times in cba, rcdd
SEXP Rf_VectorToPairList(SEXP x); // Rf_VectorToPairList unused
// VectorToPairList used 13 times in pomp, arules
SEXP Rf_asCharacterFactor(SEXP x); // Rf_asCharacterFactor used 3 times in tidyr, reshape2, RSQLite
// asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
int Rf_asLogical(SEXP x); // Rf_asLogical used 45 times in 11 packages
// asLogical used 462 times in 64 packages
int Rf_asInteger(SEXP x); // Rf_asInteger used 746 times in 23 packages
// asInteger used 1277 times in 140 packages
double Rf_asReal(SEXP x); // Rf_asReal used 113 times in 17 packages
// asReal used 383 times in 83 packages
Rcomplex Rf_asComplex(SEXP x); // Rf_asComplex unused
// asComplex used 1 time in ff
typedef struct R_allocator R_allocator_t;
char * Rf_acopy_string(const char *); // Rf_acopy_string unused
// acopy_string used 10 times in splusTimeDate
void Rf_addMissingVarsToNewEnv(SEXP, SEXP); // Rf_addMissingVarsToNewEnv unused
// addMissingVarsToNewEnv unused
SEXP Rf_alloc3DArray(SEXPTYPE, int, int, int); // Rf_alloc3DArray unused
// alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
SEXP Rf_allocArray(SEXPTYPE, SEXP); // Rf_allocArray used 4 times in h5
// allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
SEXP Rf_allocFormalsList2(SEXP sym1, SEXP sym2); // Rf_allocFormalsList2 unused
// allocFormalsList2 unused
SEXP Rf_allocFormalsList3(SEXP sym1, SEXP sym2, SEXP sym3); // Rf_allocFormalsList3 unused
// allocFormalsList3 unused
SEXP Rf_allocFormalsList4(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4); // Rf_allocFormalsList4 unused
// allocFormalsList4 unused
SEXP Rf_allocFormalsList5(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5); // Rf_allocFormalsList5 unused
// allocFormalsList5 unused
SEXP Rf_allocFormalsList6(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5, SEXP sym6); // Rf_allocFormalsList6 unused
// allocFormalsList6 unused
SEXP Rf_allocMatrix(SEXPTYPE, int, int); // Rf_allocMatrix used 122 times in 14 packages
// allocMatrix used 1577 times in 244 packages
SEXP Rf_allocList(int); // Rf_allocList unused
// allocList used 60 times in 25 packages
SEXP Rf_allocS4Object(void); // Rf_allocS4Object used 2 times in Rserve, RSclient
// allocS4Object used 1 time in arules
SEXP Rf_allocSExp(SEXPTYPE); // Rf_allocSExp unused
// allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
SEXP Rf_allocVector3(SEXPTYPE, R_xlen_t, R_allocator_t*); // Rf_allocVector3 unused
// allocVector3 unused
R_xlen_t Rf_any_duplicated(SEXP x, Rboolean from_last); // Rf_any_duplicated unused
// any_duplicated used 5 times in data.table, checkmate
R_xlen_t Rf_any_duplicated3(SEXP x, SEXP incomp, Rboolean from_last); // Rf_any_duplicated3 unused
// any_duplicated3 unused
SEXP Rf_applyClosure(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_applyClosure unused
// applyClosure unused
SEXP Rf_arraySubscript(int, SEXP, SEXP, SEXP (*)(SEXP,SEXP),
SEXP (*)(SEXP, int), SEXP);
SEXP Rf_classgets(SEXP, SEXP); // Rf_classgets used 27 times in fts, clpAPI, cplexAPI, sybilSBML, Rblpapi, glpkAPI
// classgets used 91 times in 30 packages
SEXP Rf_cons(SEXP, SEXP); // Rf_cons used 39 times in dplyr, Rcpp, Rcpp11
// cons used 609 times in 39 packages
void Rf_copyMatrix(SEXP, SEXP, Rboolean); // Rf_copyMatrix used 8 times in CNVassoc
// copyMatrix used 7 times in BDgraph, Matrix, kza
void Rf_copyListMatrix(SEXP, SEXP, Rboolean); // Rf_copyListMatrix unused
// copyListMatrix used 1 time in Matrix
void Rf_copyMostAttrib(SEXP, SEXP); // Rf_copyMostAttrib used 8 times in tidyr, purrr, Rcpp, reshape2
// copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
void Rf_copyVector(SEXP, SEXP); // Rf_copyVector unused
// copyVector used 12 times in tm, kza, mlegp, adaptivetau
int Rf_countContexts(int, int); // Rf_countContexts unused
// countContexts unused
SEXP Rf_CreateTag(SEXP); // Rf_CreateTag unused
// CreateTag used 1 time in rgp
void Rf_defineVar(SEXP, SEXP, SEXP); // Rf_defineVar used 7 times in purrr, Rcpp, Rserve, Rcpp11
// defineVar used 218 times in 38 packages
SEXP Rf_dimgets(SEXP, SEXP); // Rf_dimgets unused
// dimgets used 3 times in CorrBin
SEXP Rf_dimnamesgets(SEXP, SEXP); // Rf_dimnamesgets unused
// dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
SEXP Rf_DropDims(SEXP); // Rf_DropDims unused
// DropDims unused
SEXP Rf_duplicate(SEXP); // Rf_duplicate used 21 times in XML, data.table, Rcpp11, lme4, dplyr, Rcpp, RcppClassic, grr, NMF, copula
// duplicate used 2088 times in 224 packages
SEXP Rf_shallow_duplicate(SEXP); // Rf_shallow_duplicate unused
// shallow_duplicate used 2 times in tmlenet, smint
SEXP Rf_lazy_duplicate(SEXP); // Rf_lazy_duplicate unused
// lazy_duplicate unused
SEXP Rf_duplicated(SEXP, Rboolean); // Rf_duplicated unused
// duplicated used 402 times in 100 packages
Rboolean R_envHasNoSpecialSymbols(SEXP); // R_envHasNoSpecialSymbols unused
SEXP Rf_eval(SEXP, SEXP); // Rf_eval used 105 times in 24 packages
// eval used 25178 times in 269 packages
SEXP Rf_findFun(SEXP, SEXP); // Rf_findFun used 7 times in Rcpp, Rcpp11, littler, RGtk2
// findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
SEXP Rf_findVar(SEXP, SEXP); // Rf_findVar used 19 times in R2SWF, Rcpp11, dplyr, Rcpp, pryr, rJava, littler, showtext
// findVar used 1333 times in 24 packages
SEXP Rf_findVarInFrame(SEXP, SEXP); // Rf_findVarInFrame used 7 times in RCurl, Rcpp, Rcpp11
// findVarInFrame used 101 times in 13 packages
SEXP Rf_findVarInFrame3(SEXP, SEXP, Rboolean); // Rf_findVarInFrame3 used 1 time in pryr
// findVarInFrame3 used 5 times in datamap
SEXP Rf_getAttrib(SEXP, SEXP); // Rf_getAttrib used 256 times in 36 packages
// getAttrib used 1930 times in 239 packages
SEXP Rf_GetArrayDimnames(SEXP); // Rf_GetArrayDimnames unused
// GetArrayDimnames unused
SEXP Rf_GetColNames(SEXP); // Rf_GetColNames unused
// GetColNames unused
void Rf_GetMatrixDimnames(SEXP, SEXP*, SEXP*, const char**, const char**); // Rf_GetMatrixDimnames unused
// GetMatrixDimnames used 2 times in Kmisc, optmatch
SEXP Rf_GetOption(SEXP, SEXP); // Rf_GetOption unused
// GetOption used 5 times in rgl, gmp, Cairo, RGtk2
SEXP Rf_GetOption1(SEXP); // Rf_GetOption1 used 5 times in RProtoBuf, gmp
// GetOption1 used 1 time in PCICt
int Rf_GetOptionDigits(void); // Rf_GetOptionDigits unused
// GetOptionDigits unused
int Rf_GetOptionWidth(void); // Rf_GetOptionWidth used 1 time in progress
// GetOptionWidth unused
SEXP Rf_GetRowNames(SEXP); // Rf_GetRowNames unused
// GetRowNames unused
void Rf_gsetVar(SEXP, SEXP, SEXP); // Rf_gsetVar unused
// gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
SEXP Rf_install(const char *); // Rf_install used 990 times in 50 packages
// install used 3178 times in 224 packages
SEXP Rf_installChar(SEXP); // Rf_installChar used 15 times in dplyr, Rcpp
// installChar used 4 times in dplyr
SEXP Rf_installDDVAL(int i); // Rf_installDDVAL unused
// installDDVAL unused
SEXP Rf_installS3Signature(const char *, const char *); // Rf_installS3Signature unused
// installS3Signature unused
Rboolean Rf_isFree(SEXP); // Rf_isFree unused
// isFree unused
Rboolean Rf_isOrdered(SEXP); // Rf_isOrdered unused
// isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
Rboolean Rf_isUnordered(SEXP); // Rf_isUnordered used 1 time in OpenMx
// isUnordered used 2 times in PythonInR
Rboolean Rf_isUnsorted(SEXP, Rboolean); // Rf_isUnsorted unused
// isUnsorted unused
SEXP Rf_lengthgets(SEXP, R_len_t); // Rf_lengthgets used 7 times in readxl, readr
// lengthgets used 47 times in 11 packages
SEXP Rf_xlengthgets(SEXP, R_xlen_t); // Rf_xlengthgets unused
// xlengthgets unused
SEXP R_lsInternal(SEXP, Rboolean); // R_lsInternal used 5 times in Rcpp, rJava, Rcpp11, qtbase
SEXP R_lsInternal3(SEXP, Rboolean, Rboolean); // R_lsInternal3 unused
SEXP Rf_match(SEXP, SEXP, int); // Rf_match used 2 times in Rvcg
// match used 8773 times in 388 packages
SEXP Rf_matchE(SEXP, SEXP, int, SEXP); // Rf_matchE unused
// matchE unused
SEXP Rf_namesgets(SEXP, SEXP); // Rf_namesgets used 4 times in OpenMx, rpf
// namesgets used 80 times in 14 packages
SEXP Rf_mkChar(const char *); // Rf_mkChar used 517 times in 32 packages
// mkChar used 4545 times in 287 packages
SEXP Rf_mkCharLen(const char *, int); // Rf_mkCharLen used 21 times in refGenome, redland, Rcpp11, stringi, Kmisc, Rcpp, sourcetools, iotools
// mkCharLen used 38 times in 16 packages
Rboolean Rf_NonNullStringMatch(SEXP, SEXP); // Rf_NonNullStringMatch unused
// NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
int Rf_ncols(SEXP); // Rf_ncols used 22 times in fdaPDE, fts, BoomSpikeSlab, Rmosek, ccgarch, rcppbugs, biganalytics, CEC, OpenMx, RTriangle
// ncols used 3805 times in 182 packages
int Rf_nrows(SEXP); // Rf_nrows used 32 times in 12 packages
// nrows used 4332 times in 215 packages
SEXP Rf_nthcdr(SEXP, int); // Rf_nthcdr unused
// nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
typedef enum {Bytes, Chars, Width} nchar_type;
int R_nchar(SEXP string, nchar_type type_, // R_nchar unused
Rboolean allowNA, Rboolean keepNA, const char* msg_name);
Rboolean Rf_pmatch(SEXP, SEXP, Rboolean); // Rf_pmatch unused
// pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
Rboolean Rf_psmatch(const char *, const char *, Rboolean); // Rf_psmatch unused
// psmatch used 5 times in rgl
void Rf_PrintValue(SEXP); // Rf_PrintValue used 95 times in 19 packages
// PrintValue used 119 times in 13 packages
void Rf_readS3VarsFromFrame(SEXP, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*); // Rf_readS3VarsFromFrame unused
// readS3VarsFromFrame unused
SEXP Rf_setAttrib(SEXP, SEXP, SEXP); // Rf_setAttrib used 325 times in 35 packages
// setAttrib used 1830 times in 251 packages
void Rf_setSVector(SEXP*, int, SEXP); // Rf_setSVector unused
// setSVector unused
void Rf_setVar(SEXP, SEXP, SEXP); // Rf_setVar used 1 times in showtext
// setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
SEXP Rf_stringSuffix(SEXP, int); // Rf_stringSuffix unused
// stringSuffix unused
SEXPTYPE Rf_str2type(const char *); // Rf_str2type used 4 times in purrr
// str2type used 1 times in RGtk2
Rboolean Rf_StringBlank(SEXP); // Rf_StringBlank used 1 times in LCMCR
// StringBlank unused
SEXP Rf_substitute(SEXP,SEXP); // Rf_substitute unused
// substitute used 255 times in 56 packages
const char * Rf_translateChar(SEXP); // Rf_translateChar used 1 times in devEMF
// translateChar used 59 times in 19 packages
const char * Rf_translateChar0(SEXP); // Rf_translateChar0 unused
// translateChar0 unused
const char * Rf_translateCharUTF8(SEXP); // Rf_translateCharUTF8 used 22 times in Rserve, xml2, readr, gdtools, Rcpp11, dplyr, Rcpp, haven
// translateCharUTF8 used 66 times in 13 packages
const char * Rf_type2char(SEXPTYPE); // Rf_type2char used 33 times in 13 packages
// type2char used 107 times in 12 packages
SEXP Rf_type2rstr(SEXPTYPE); // Rf_type2rstr unused
// type2rstr unused
SEXP Rf_type2str(SEXPTYPE); // Rf_type2str used 4 times in Rcpp, pryr
// type2str used 3 times in Kmisc, yaml
SEXP Rf_type2str_nowarn(SEXPTYPE); // Rf_type2str_nowarn unused
// type2str_nowarn used 1 times in qrmtools
void Rf_unprotect_ptr(SEXP); // Rf_unprotect_ptr unused
// unprotect_ptr unused
void __attribute__((noreturn)) R_signal_protect_error(void);
void __attribute__((noreturn)) R_signal_unprotect_error(void);
void __attribute__((noreturn)) R_signal_reprotect_error(PROTECT_INDEX i);
SEXP R_tryEval(SEXP, SEXP, int *); // R_tryEval used 1118 times in 24 packages
SEXP R_tryEvalSilent(SEXP, SEXP, int *); // R_tryEvalSilent unused
const char *R_curErrorBuf(); // R_curErrorBuf used 4 times in Rhpc, Rcpp11
Rboolean Rf_isS4(SEXP); // Rf_isS4 used 16 times in Rcpp, Rcpp11
// isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
SEXP Rf_asS4(SEXP, Rboolean, int); // Rf_asS4 unused
// asS4 unused
SEXP Rf_S3Class(SEXP); // Rf_S3Class unused
// S3Class used 4 times in RInside, littler
int Rf_isBasicClass(const char *); // Rf_isBasicClass unused
// isBasicClass unused
Rboolean R_cycle_detected(SEXP s, SEXP child); // R_cycle_detected unused
typedef enum {
CE_NATIVE = 0,
CE_UTF8 = 1,
CE_LATIN1 = 2,
CE_BYTES = 3,
CE_SYMBOL = 5,
CE_ANY =99
} cetype_t; // cetype_t used 47 times in 13 packages
cetype_t Rf_getCharCE(SEXP); // Rf_getCharCE used 13 times in RSclient, Rserve, genie, dplyr, Rcpp, rJava, ROracle
// getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
SEXP Rf_mkCharCE(const char *, cetype_t); // Rf_mkCharCE used 40 times in readxl, mongolite, xml2, readr, Rcpp11, stringi, commonmark, dplyr, Rcpp, haven
// mkCharCE used 72 times in 15 packages
SEXP Rf_mkCharLenCE(const char *, int, cetype_t); // Rf_mkCharLenCE used 68 times in readr, ROracle, stringi
// mkCharLenCE used 23 times in 11 packages
const char *Rf_reEnc(const char *x, cetype_t ce_in, cetype_t ce_out, int subst); // Rf_reEnc used 5 times in RCurl, RSclient, Rserve, rJava
// reEnc used 3 times in PythonInR, RJSONIO
SEXP R_forceAndCall(SEXP e, int n, SEXP rho); // R_forceAndCall unused
SEXP R_MakeExternalPtr(void *p, SEXP tag, SEXP prot); // R_MakeExternalPtr used 321 times in 102 packages
void *R_ExternalPtrAddr(SEXP s); // R_ExternalPtrAddr used 2127 times in 115 packages
SEXP R_ExternalPtrTag(SEXP s); // R_ExternalPtrTag used 195 times in 32 packages
SEXP R_ExternalPtrProtected(SEXP s); // R_ExternalPtrProtected used 6 times in PopGenome, Rcpp, WhopGenome, data.table, Rcpp11
void R_ClearExternalPtr(SEXP s); // R_ClearExternalPtr used 157 times in 64 packages
void R_SetExternalPtrAddr(SEXP s, void *p); // R_SetExternalPtrAddr used 23 times in ff, PopGenome, RCurl, rstream, Rlabkey, WhopGenome, XML, RJSONIO, memisc, ROracle
void R_SetExternalPtrTag(SEXP s, SEXP tag); // R_SetExternalPtrTag used 16 times in PopGenome, rstream, Rlabkey, WhopGenome, Rcpp11, Rcpp, rLindo
void R_SetExternalPtrProtected(SEXP s, SEXP p); // R_SetExternalPtrProtected used 9 times in PopGenome, rstream, Rlabkey, Rcpp, WhopGenome, Rcpp11
typedef void (*R_CFinalizer_t)(SEXP);
void R_RegisterFinalizer(SEXP s, SEXP fun); // R_RegisterFinalizer used 1 times in XML
void R_RegisterCFinalizer(SEXP s, R_CFinalizer_t fun); // R_RegisterCFinalizer used 73 times in 27 packages
void R_RegisterFinalizerEx(SEXP s, SEXP fun, Rboolean onexit); // R_RegisterFinalizerEx unused
void R_RegisterCFinalizerEx(SEXP s, R_CFinalizer_t fun, Rboolean onexit); // R_RegisterCFinalizerEx used 152 times in 58 packages
void R_RunPendingFinalizers(void); // R_RunPendingFinalizers unused
SEXP R_MakeWeakRef(SEXP key, SEXP val, SEXP fin, Rboolean onexit); // R_MakeWeakRef used 4 times in igraph, svd
SEXP R_MakeWeakRefC(SEXP key, SEXP val, R_CFinalizer_t fin, Rboolean onexit); // R_MakeWeakRefC unused
SEXP R_WeakRefKey(SEXP w); // R_WeakRefKey used 3 times in igraph, Rcpp, Rcpp11
SEXP R_WeakRefValue(SEXP w); // R_WeakRefValue used 7 times in igraph, Rcpp, svd, Rcpp11
void R_RunWeakRefFinalizer(SEXP w); // R_RunWeakRefFinalizer used 1 times in igraph
SEXP R_PromiseExpr(SEXP); // R_PromiseExpr unused
SEXP R_ClosureExpr(SEXP); // R_ClosureExpr unused
void R_initialize_bcode(void); // R_initialize_bcode unused
SEXP R_bcEncode(SEXP); // R_bcEncode unused
SEXP R_bcDecode(SEXP); // R_bcDecode unused
Rboolean R_ToplevelExec(void (*fun)(void *), void *data);
SEXP R_ExecWithCleanup(SEXP (*fun)(void *), void *data,
void (*cleanfun)(void *), void *cleandata);
void R_RestoreHashCount(SEXP rho); // R_RestoreHashCount unused
Rboolean R_IsPackageEnv(SEXP rho); // R_IsPackageEnv unused
SEXP R_PackageEnvName(SEXP rho); // R_PackageEnvName unused
SEXP R_FindPackageEnv(SEXP info); // R_FindPackageEnv unused
Rboolean R_IsNamespaceEnv(SEXP rho); // R_IsNamespaceEnv unused
SEXP R_NamespaceEnvSpec(SEXP rho); // R_NamespaceEnvSpec unused
SEXP R_FindNamespace(SEXP info); // R_FindNamespace used 14 times in 11 packages
void R_LockEnvironment(SEXP env, Rboolean bindings); // R_LockEnvironment used 2 times in Rcpp, Rcpp11
Rboolean R_EnvironmentIsLocked(SEXP env); // R_EnvironmentIsLocked used 2 times in Rcpp, Rcpp11
void R_LockBinding(SEXP sym, SEXP env); // R_LockBinding used 3 times in data.table, Rcpp, Rcpp11
void R_unLockBinding(SEXP sym, SEXP env); // R_unLockBinding used 2 times in Rcpp, Rcpp11
void R_MakeActiveBinding(SEXP sym, SEXP fun, SEXP env); // R_MakeActiveBinding unused
Rboolean R_BindingIsLocked(SEXP sym, SEXP env); // R_BindingIsLocked used 2 times in Rcpp, Rcpp11
Rboolean R_BindingIsActive(SEXP sym, SEXP env); // R_BindingIsActive used 2 times in Rcpp, Rcpp11
Rboolean R_HasFancyBindings(SEXP rho); // R_HasFancyBindings unused
void Rf_errorcall(SEXP, const char *, ...) __attribute__((noreturn)); // Rf_errorcall used 27 times in purrr, mongolite, jsonlite, pbdMPI, rJava, openssl
// errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
void Rf_warningcall(SEXP, const char *, ...); // Rf_warningcall used 5 times in pbdMPI, mongolite
// warningcall used 4 times in RInside, jsonlite, pbdMPI
void Rf_warningcall_immediate(SEXP, const char *, ...); // Rf_warningcall_immediate used 2 times in mongolite, V8
// warningcall_immediate used 2 times in Runuran
void R_XDREncodeDouble(double d, void *buf); // R_XDREncodeDouble unused
double R_XDRDecodeDouble(void *buf); // R_XDRDecodeDouble unused
void R_XDREncodeInteger(int i, void *buf); // R_XDREncodeInteger unused
int R_XDRDecodeInteger(void *buf); // R_XDRDecodeInteger unused
typedef void *R_pstream_data_t;
typedef enum {
R_pstream_any_format,
R_pstream_ascii_format,
R_pstream_binary_format,
R_pstream_xdr_format,
R_pstream_asciihex_format
} R_pstream_format_t; // R_pstream_format_t used 7 times in RApiSerialize, Rhpc, fastdigest
typedef struct R_outpstream_st *R_outpstream_t;
struct R_outpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int version;
void (*OutChar)(R_outpstream_t, int);
void (*OutBytes)(R_outpstream_t, void *, int);
SEXP (*OutPersistHookFunc)(SEXP, SEXP);
SEXP OutPersistHookData; // OutPersistHookData unused
};
typedef struct R_inpstream_st *R_inpstream_t;
struct R_inpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int (*InChar)(R_inpstream_t);
void (*InBytes)(R_inpstream_t, void *, int);
SEXP (*InPersistHookFunc)(SEXP, SEXP);
SEXP InPersistHookData; // InPersistHookData unused
};
void R_InitInPStream(R_inpstream_t stream, R_pstream_data_t data, // R_InitInPStream used 2 times in RApiSerialize, Rhpc
R_pstream_format_t type,
int (*inchar)(R_inpstream_t),
void (*inbytes)(R_inpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitOutPStream(R_outpstream_t stream, R_pstream_data_t data, // R_InitOutPStream used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
R_pstream_format_t type, int version,
void (*outchar)(R_outpstream_t, int),
void (*outbytes)(R_outpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileInPStream(R_inpstream_t stream, FILE *fp, // R_InitFileInPStream used 1 times in filehash
R_pstream_format_t type,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileOutPStream(R_outpstream_t stream, FILE *fp, // R_InitFileOutPStream unused
R_pstream_format_t type, int version,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_Serialize(SEXP s, R_outpstream_t ops); // R_Serialize used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
SEXP R_Unserialize(R_inpstream_t ips); // R_Unserialize used 4 times in RApiSerialize, Rhpc, filehash
SEXP R_do_slot(SEXP obj, SEXP name); // R_do_slot used 162 times in 11 packages
SEXP R_do_slot_assign(SEXP obj, SEXP name, SEXP value); // R_do_slot_assign used 17 times in excel.link, redland, Rcpp11, Matrix, TMB, Rcpp, FREGAT, HiPLARM, OpenMx, rJPSGCS
int R_has_slot(SEXP obj, SEXP name); // R_has_slot used 14 times in Matrix, Rcpp, HiPLARM, OpenMx, Rcpp11
SEXP R_do_MAKE_CLASS(const char *what); // R_do_MAKE_CLASS used 6 times in TMB, Rcpp, Rcpp11
SEXP R_getClassDef (const char *what); // R_getClassDef used 5 times in memisc, Rcpp, Rcpp11
SEXP R_getClassDef_R(SEXP what); // R_getClassDef_R unused
Rboolean R_has_methods_attached(void); // R_has_methods_attached unused
Rboolean R_isVirtualClass(SEXP class_def, SEXP env); // R_isVirtualClass unused
Rboolean R_extends (SEXP class1, SEXP class2, SEXP env); // R_extends unused
SEXP R_do_new_object(SEXP class_def); // R_do_new_object used 9 times in TMB, memisc, Rcpp, Rcpp11
int R_check_class_and_super(SEXP x, const char **valid, SEXP rho); // R_check_class_and_super used 5 times in Matrix, Rmosek, HiPLARM
int R_check_class_etc (SEXP x, const char **valid); // R_check_class_etc used 41 times in Matrix, HiPLARM
void R_PreserveObject(SEXP); // R_PreserveObject used 112 times in 29 packages
void R_ReleaseObject(SEXP); // R_ReleaseObject used 114 times in 27 packages
void R_dot_Last(void); // R_dot_Last used 4 times in RInside, rJava, littler
void R_RunExitFinalizers(void); // R_RunExitFinalizers used 4 times in RInside, TMB, rJava, littler
int R_system(const char *); // R_system used 1 times in rJava
Rboolean R_compute_identical(SEXP, SEXP, int); // R_compute_identical used 14 times in igraph, Matrix, rgp, data.table
void R_orderVector(int *indx, int n, SEXP arglist, Rboolean nalast, Rboolean decreasing); // R_orderVector used 5 times in glpkAPI, nontarget, CEGO
SEXP Rf_allocVector(SEXPTYPE, R_xlen_t); // Rf_allocVector used 1086 times in 59 packages
// allocVector used 12419 times in 551 packages
Rboolean Rf_conformable(SEXP, SEXP); // Rf_conformable unused
// conformable used 141 times in 22 packages
SEXP Rf_elt(SEXP, int); // Rf_elt unused
// elt used 2310 times in 37 packages
Rboolean Rf_inherits(SEXP, const char *); // Rf_inherits used 530 times in 21 packages
// inherits used 814 times in 80 packages
Rboolean Rf_isArray(SEXP); // Rf_isArray unused
// isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
Rboolean Rf_isFactor(SEXP); // Rf_isFactor used 22 times in 11 packages
// isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
Rboolean Rf_isFrame(SEXP); // Rf_isFrame used 1 times in OpenMx
// isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
Rboolean Rf_isFunction(SEXP); // Rf_isFunction used 4 times in Rserve, genie, RcppClassic
// isFunction used 274 times in 43 packages
Rboolean Rf_isInteger(SEXP); // Rf_isInteger used 39 times in 14 packages
// isInteger used 402 times in 77 packages
Rboolean Rf_isLanguage(SEXP); // Rf_isLanguage unused
// isLanguage used 63 times in PythonInR, rgp, RandomFields
Rboolean Rf_isList(SEXP); // Rf_isList unused
// isList used 40 times in 11 packages
Rboolean Rf_isMatrix(SEXP); // Rf_isMatrix used 55 times in 16 packages
// isMatrix used 293 times in 65 packages
Rboolean Rf_isNewList(SEXP); // Rf_isNewList used 6 times in Rmosek, RcppClassic
// isNewList used 103 times in 27 packages
Rboolean Rf_isNumber(SEXP); // Rf_isNumber unused
// isNumber used 14 times in PythonInR, readr, stringi, qtbase
Rboolean Rf_isNumeric(SEXP); // Rf_isNumeric used 31 times in Rmosek, gaselect, RcppCNPy, genie, mets, Morpho, rstan, Rcpp, RcppClassic, OpenMx
// isNumeric used 468 times in 49 packages
Rboolean Rf_isPairList(SEXP); // Rf_isPairList unused
// isPairList used 2 times in PythonInR
Rboolean Rf_isPrimitive(SEXP); // Rf_isPrimitive unused
// isPrimitive used 7 times in PythonInR, qtbase
Rboolean Rf_isTs(SEXP); // Rf_isTs unused
// isTs used 2 times in PythonInR
Rboolean Rf_isUserBinop(SEXP); // Rf_isUserBinop unused
// isUserBinop used 2 times in PythonInR
Rboolean Rf_isValidString(SEXP); // Rf_isValidString unused
// isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
Rboolean Rf_isValidStringF(SEXP); // Rf_isValidStringF unused
// isValidStringF used 2 times in PythonInR
Rboolean Rf_isVector(SEXP); // Rf_isVector used 15 times in RProtoBuf, RcppCNPy, stringi, purrr, RcppClassic, OpenMx, adaptivetau
// isVector used 182 times in 46 packages
Rboolean Rf_isVectorAtomic(SEXP); // Rf_isVectorAtomic used 13 times in agop, tidyr, reshape2, stringi
// isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
Rboolean Rf_isVectorList(SEXP); // Rf_isVectorList used 23 times in genie, purrr, RNiftyReg, stringi
// isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
Rboolean Rf_isVectorizable(SEXP); // Rf_isVectorizable unused
// isVectorizable used 3 times in PythonInR, robfilter
SEXP Rf_lang1(SEXP); // Rf_lang1 used 14 times in PopGenome, WhopGenome, nontarget, Rcpp11, purrr, Rcpp, CEGO
// lang1 used 30 times in 11 packages
SEXP Rf_lang2(SEXP, SEXP); // Rf_lang2 used 64 times in 13 packages
// lang2 used 216 times in 75 packages
SEXP Rf_lang3(SEXP, SEXP, SEXP); // Rf_lang3 used 19 times in purrr, RcppDE, Rcpp, lbfgs, emdist, Rcpp11
// lang3 used 107 times in 28 packages
SEXP Rf_lang4(SEXP, SEXP, SEXP, SEXP); // Rf_lang4 used 8 times in lme4, purrr, Rcpp, diversitree, Rcpp11
// lang4 used 65 times in 21 packages
SEXP Rf_lang5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang5 unused
// lang5 used 11 times in PBSddesolve, GNE, SMC
SEXP Rf_lang6(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang6 used 1 times in lme4
// lang6 used 2 times in GNE
SEXP Rf_lastElt(SEXP); // Rf_lastElt unused
// lastElt unused
SEXP Rf_lcons(SEXP, SEXP); // Rf_lcons used 26 times in purrr, rcppbugs, Rcpp, pryr
// lcons used 16 times in rmgarch
R_len_t Rf_length(SEXP); // Rf_length used 662 times in 69 packages
SEXP Rf_list1(SEXP); // Rf_list1 used 1 times in Rcpp
// list1 used 197 times in 11 packages
SEXP Rf_list2(SEXP, SEXP); // Rf_list2 unused
// list2 used 441 times in 12 packages
SEXP Rf_list3(SEXP, SEXP, SEXP); // Rf_list3 unused
// list3 used 72 times in marked, Rdsdp, BH, svd
SEXP Rf_list4(SEXP, SEXP, SEXP, SEXP); // Rf_list4 unused
// list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
SEXP Rf_list5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_list5 unused
// list5 used 63 times in Rdsdp, BH
SEXP Rf_listAppend(SEXP, SEXP); // Rf_listAppend unused
// listAppend used 1 times in ore
SEXP Rf_mkNamed(SEXPTYPE, const char **); // Rf_mkNamed used 8 times in Matrix, gmp, RSclient, HiPLARM
// mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
SEXP Rf_mkString(const char *); // Rf_mkString used 179 times in 24 packages
// mkString used 814 times in 96 packages
int Rf_nlevels(SEXP); // Rf_nlevels unused
// nlevels used 546 times in 26 packages
int Rf_stringPositionTr(SEXP, const char *); // Rf_stringPositionTr unused
// stringPositionTr unused
SEXP Rf_ScalarComplex(Rcomplex); // Rf_ScalarComplex unused
// ScalarComplex unused
SEXP Rf_ScalarInteger(int); // Rf_ScalarInteger used 390 times in 20 packages
// ScalarInteger used 704 times in 88 packages
SEXP Rf_ScalarLogical(int); // Rf_ScalarLogical used 160 times in 20 packages
// ScalarLogical used 450 times in 64 packages
SEXP Rf_ScalarRaw(Rbyte); // Rf_ScalarRaw unused
// ScalarRaw used 4 times in qtbase, RGtk2
SEXP Rf_ScalarReal(double); // Rf_ScalarReal used 146 times in 19 packages
// ScalarReal used 330 times in 65 packages
SEXP Rf_ScalarString(SEXP); // Rf_ScalarString used 33 times in agop, Nippon, Rcpp11, rpf, stringi, purrr, Rcpp
// ScalarString used 198 times in 37 packages
R_xlen_t Rf_xlength(SEXP); // Rf_xlength used 46 times in WGCNA, Rcpp, Rcpp11
SEXP Rf_protect(SEXP); // Rf_protect used 332 times in 12 packages
// protect used 599 times in 101 packages
void Rf_unprotect(int); // Rf_unprotect used 289 times in 12 packages
// unprotect used 110 times in 35 packages
void R_ProtectWithIndex(SEXP, PROTECT_INDEX *); // R_ProtectWithIndex used 8 times in OpenMx
void R_Reprotect(SEXP, PROTECT_INDEX); // R_Reprotect used 2 times in OpenMx
SEXP R_FixupRHS(SEXP x, SEXP y); // R_FixupRHS unused
}
</pre>
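The per-declaration annotations above roll up into the bucketed counts shown in the Stats section below. A minimal Python sketch of that aggregation (the function name and regex are illustrative, not the actual tooling used to build these pages):

```python
import re
from collections import Counter

# Matches annotations of the two forms seen above:
#   "// getAttrib used 1930 times in 239 packages"
#   "// GetRowNames unused"
USE_RE = re.compile(r"//\s*(\w+)\s+(?:used\s+(\d+)\s+times?\s+in|(unused))")

def bucket(annotations):
    """Bucket identifiers by usage count into 0..9 and 10+, as in the
    Stats table. `annotations` is an iterable of comment strings."""
    hist = Counter()
    for line in annotations:
        m = USE_RE.search(line)
        if not m:
            continue
        n = 0 if m.group(3) else int(m.group(2))
        hist["10+" if n >= 10 else str(n)] += 1
    return hist
```

For example, feeding it the three annotation strings for Rf_getAttrib, GetRowNames, and GetOption from the listing above would place one identifier each in the "10+", "0", and "5" buckets.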
== Stats ==
<pre>
0 1 2 3 4 5 6 7 8 9 10+
Macro: 31 5 5 3 4 1 2 2 0 1 53 (usage count)
(107) 31 11 7 1 5 1 3 3 2 2 41 (distinct package count)
Function: 103 15 17 12 16 12 7 7 4 5 190 (usage count)
(388) 103 31 32 22 29 7 9 7 1 6 141 (distinct package count)
Variable: 26 0 4 2 1 0 1 0 0 0 20 (usage count)
(54) 26 4 2 2 0 0 0 1 0 1 18 (distinct package count)
TypeDef: 0 0 0 0 0 0 0 1 0 0 5 (usage count)
(6) 0 0 0 1 0 0 0 0 0 0 5 (distinct package count)
Alias: 40 9 9 8 6 4 0 3 1 1 97 (usage count)
(178) 40 25 14 5 9 1 5 5 3 1 70 (distinct package count)
</pre>
(for a quick explanation of these stats, see [[Native_API_stats_of_R.h]])
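As a rough illustration of how per-identifier counts like "used 1930 times in 239 packages" could be gathered, the hypothetical helper below scans package source trees with word-boundary matches (so `getAttrib` does not also count hits of `Rf_getAttrib`); the real numbers on this page were presumably produced by dedicated tooling over CRAN package sources:

```python
import os
import re
from collections import defaultdict

def count_api_usage(identifiers, package_dirs):
    """For each identifier, return (total occurrences, distinct packages).

    identifiers  -- C API names to look for, e.g. "Rf_getAttrib"
    package_dirs -- mapping of package name -> path to its source tree
    """
    counts = defaultdict(int)    # identifier -> total occurrences
    packages = defaultdict(set)  # identifier -> packages using it
    patterns = {name: re.compile(r"\b%s\b" % re.escape(name))
                for name in identifiers}
    for pkg, root in package_dirs.items():
        for dirpath, _, files in os.walk(root):
            for fname in files:
                if not fname.endswith((".c", ".h", ".cc", ".cpp")):
                    continue
                with open(os.path.join(dirpath, fname),
                          errors="replace") as fh:
                    text = fh.read()
                for name, pat in patterns.items():
                    n = len(pat.findall(text))
                    if n:
                        counts[name] += n
                        packages[name].add(pkg)
    return {name: (counts[name], len(packages[name]))
            for name in identifiers}
```

For `Rf_getAttrib` such a scan would report a pair like (256, 36) — total uses and the number of distinct packages — matching the comment format used throughout the listing above.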
'''Native API stats of all header files'''
(page created 2016-06-20 by Lukasstadler)
== Input ==
<pre>
#define USE_RINTERNALS
#include "Rembedded.h"
#include "Rmath.h"
#include "libintl.h"
#include "R.h"
#include "Rinterface.h"
#include "Rdefines.h"
#include "Rinternals.h"
#include "S.h"
#include "R_ext/Applic.h"
#include "R_ext/Arith.h"
#include "R_ext/BLAS.h"
#include "R_ext/Boolean.h"
#include "R_ext/Callbacks.h"
#include "R_ext/Complex.h"
#include "R_ext/Connections.h"
#include "R_ext/Constants.h"
#include "R_ext/Error.h"
#include "R_ext/eventloop.h"
#include "R_ext/GetX11Image.h"
#include "R_ext/GraphicsEngine.h"
#include "R_ext/GraphicsDevice.h"
#include "R_ext/Itermacros.h"
#include "R_ext/Lapack.h"
#include "R_ext/libextern.h"
#include "R_ext/Linpack.h"
#include "R_ext/MathThreads.h"
#include "R_ext/Memory.h"
#include "R_ext/Parse.h"
#include "R_ext/Print.h"
#include "R_ext/PrtUtil.h"
#include "R_ext/QuartzDevice.h"
#include "R_ext/R-ftp-http.h"
#include "R_ext/Rallocators.h"
#include "R_ext/Random.h"
#include "R_ext/Rdynload.h"
#include "R_ext/Riconv.h"
#include "R_ext/RS.h"
#include "R_ext/RStartup.h"
#include "R_ext/stats_package.h"
#include "R_ext/stats_stubs.h"
#include "R_ext/Utils.h"
#include "R_ext/Visibility.h"
</pre>
== Result ==
<pre>
#define ANYSXP 18 // ANYSXP used 14 times in RPostgreSQL, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore, RGtk2
#define AS_CHARACTER(x) Rf_coerceVector(x,16) // AS_CHARACTER used 115 times in 27 packages
#define AS_COMPLEX(x) Rf_coerceVector(x,15) // AS_COMPLEX used 28 times in PearsonDS, kza, diversitree
#define AS_INTEGER(x) Rf_coerceVector(x,13) // AS_INTEGER used 753 times in 66 packages
#define AS_LIST(x) Rf_coerceVector(x,19) // AS_LIST used 81 times in RPostgreSQL, lfe, CRF, memisc, catnet, polyclip, sdnet
#define AS_LOGICAL(x) Rf_coerceVector(x,10) // AS_LOGICAL used 59 times in 13 packages
#define AS_NUMERIC(x) Rf_coerceVector(x,14) // AS_NUMERIC used 1099 times in 71 packages
#define AS_RAW(x) Rf_coerceVector(x,24) // AS_RAW used 24 times in CrypticIBDcheck, IRISSeismic, seqinr, oce
#define AS_VECTOR(x) Rf_coerceVector(x,19) // AS_VECTOR used 3 times in catnet, PET, sdnet
#define ATTRIB(x) ((x)->attrib) // ATTRIB used 83 times in 20 packages
#define AdobeSymbol2utf8 Rf_AdobeSymbol2utf8 // AdobeSymbol2utf8 used 2 times in Cairo
#define BCODESXP 21 // BCODESXP used 15 times in rcppbugs, Rcpp11, seqminer, Rcpp, pryr, rtkpp, rtkore
#define BCODE_CODE(x) ((x)->u.listsxp.carval) // BCODE_CODE unused
#define BCODE_CONSTS(x) ((x)->u.listsxp.cdrval) // BCODE_CONSTS unused
#define BCODE_EXPR(x) ((x)->u.listsxp.tagval) // BCODE_EXPR unused
#define BEGIN_SUSPEND_INTERRUPTS do { Rboolean __oldsusp__ = R_interrupts_suspended; R_interrupts_suspended = 1; // BEGIN_SUSPEND_INTERRUPTS used 22 times in 12 packages
#define BLAS_extern extern // BLAS_extern used 2 times in sparseSEM
#define BODY(x) ((x)->u.closxp.body) // BODY used 48 times in 15 packages
#define BODY_EXPR(e) R_ClosureExpr(e) // BODY_EXPR unused
#define BUILTINSXP 8 // BUILTINSXP used 24 times in 11 packages
#define CAAR(e) ((((e)->u.listsxp.carval))->u.listsxp.carval) // CAAR unused
#define CAD4R(e) ((((((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CAD4R used 14 times in earth, foreign, actuar
#define CADDDR(e) ((((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CADDDR used 21 times in RPostgreSQL, foreign, actuar, bibtex
#define CADDR(e) ((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.carval) // CADDR used 52 times in 11 packages
#define CADR(e) ((((e)->u.listsxp.cdrval))->u.listsxp.carval) // CADR used 104 times in 17 packages
#define CAR(e) ((e)->u.listsxp.carval) // CAR used 575 times in 63 packages
#define CDAR(e) ((((e)->u.listsxp.carval))->u.listsxp.cdrval) // CDAR unused
#define CDDDR(e) ((((((e)->u.listsxp.cdrval))->u.listsxp.cdrval))->u.listsxp.cdrval) // CDDDR unused
#define CDDR(e) ((((e)->u.listsxp.cdrval))->u.listsxp.cdrval) // CDDR used 52 times in Rlabkey, Rcpp11, dplyr, proxy, Rcpp, slam, tikzDevice, OpenCL, svd
#define CDR(e) ((e)->u.listsxp.cdrval) // CDR used 4523 times in 76 packages
#define CHAR(x) ((const char *) (((SEXPREC_ALIGN *) (x)) + 1)) // CHAR used 4405 times in 362 packages
#define CHARACTER_DATA(x) (((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))) // CHARACTER_DATA used 22 times in pomp, rggobi, XML, RGtk2
#define CHARACTER_POINTER(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // CHARACTER_POINTER used 19 times in multic, RPostgreSQL, arules, Hmisc, lazy, R4dfp
#define CHARACTER_VALUE(x) ((const char *) (((SEXPREC_ALIGN *) (Rf_asChar(x))) + 1)) // CHARACTER_VALUE used 269 times in 11 packages
#define CHARSXP 9 // CHARSXP used 106 times in 33 packages
#define CLOENV(x) ((x)->u.closxp.env) // CLOENV used 23 times in Rcpp11, covr, pomp, Rcpp, pryr, testthat, qtbase
#define CLOSXP 3 // CLOSXP used 83 times in 30 packages
#define COMPLEX(x) ((Rcomplex *) (((SEXPREC_ALIGN *) (x)) + 1)) // COMPLEX used 1697 times in 71 packages
#define COMPLEX_DATA(x) (((Rcomplex *) (((SEXPREC_ALIGN *) (x)) + 1))) // COMPLEX_DATA unused
#define COMPLEX_POINTER(x) ((Rcomplex *) (((SEXPREC_ALIGN *) (x)) + 1)) // COMPLEX_POINTER used 3 times in timsac, ifs
#define CONS(a, b) Rf_cons((a), (b)) // CONS used 458 times in 30 packages
#define COPY_TO_USER_STRING(x) Rf_mkChar(x) // COPY_TO_USER_STRING used 374 times in 21 packages
#define CPLXSXP 15 // CPLXSXP used 409 times in 49 packages
#define CREATE_FUNCTION_CALL(name, argList) createFunctionCall(name, argList) // CREATE_FUNCTION_CALL used 5 times in rggobi, XML, RGtk2
#define CREATE_STRING_VECTOR(x) Rf_mkChar(x) // CREATE_STRING_VECTOR used 244 times in igraph, rggobi, XML, dbarts, lazy, rwt, RGtk2
#define Calloc(n, t) (t *) R_chk_calloc( (size_t) (n), sizeof(t) ) // Calloc used 5657 times in 240 packages
#define CallocCharBuf(n) (char *) R_chk_calloc((size_t) ((n)+1), sizeof(char)) // CallocCharBuf used 3 times in cplexAPI, patchDVI
#define CreateAtVector Rf_CreateAtVector // CreateAtVector unused
#define CreateTag Rf_CreateTag // CreateTag used 1 times in rgp
#define DATAPTR(x) (((SEXPREC_ALIGN *) (x)) + 1) // DATAPTR used 113 times in 11 packages
#define DDVAL(x) ((x)->sxpinfo.gp & 1) // DDVAL unused
#define DDVAL_MASK 1 // DDVAL_MASK unused
#define DECREMENT_REFCNT(x) do {} while(0) // DECREMENT_REFCNT unused
#define DISABLE_REFCNT(x) do {} while(0) // DISABLE_REFCNT unused
#define DOTSXP 17 // DOTSXP used 16 times in RPostgreSQL, PythonInR, Rcpp11, seqminer, Rcpp, pryr, rtkpp, spikeSlabGAM, rtkore
#define DOUBLE_DATA(x) (((double *) (((SEXPREC_ALIGN *) (x)) + 1))) // DOUBLE_DATA used 9 times in bigalgebra
#define DOUBLE_DIGITS 53 // DOUBLE_DIGITS used 42 times in evd
#define DOUBLE_EPS 2.2204460492503131e-16 // DOUBLE_EPS used 180 times in 40 packages
#define DOUBLE_XMAX 1.7976931348623157e+308 // DOUBLE_XMAX used 63 times in 13 packages
#define DOUBLE_XMIN 2.2250738585072014e-308 // DOUBLE_XMIN used 13 times in unmarked, deSolve, ifultools, spatstat
#define DropDims Rf_DropDims // DropDims unused
#define ENABLE_NLS 1 // ENABLE_NLS used 80 times in 59 packages
#define ENABLE_REFCNT(x) do {} while(0) // ENABLE_REFCNT unused
#define ENCLOS(x) ((x)->u.envsxp.enclos) // ENCLOS used 7 times in Rcpp, pryr, rJava, Rcpp11, RGtk2
#define END_SUSPEND_INTERRUPTS R_interrupts_suspended = __oldsusp__; if (R_interrupts_pending && ! R_interrupts_suspended) Rf_onintr(); } while(0) // END_SUSPEND_INTERRUPTS used 18 times in 12 packages
#define ENVFLAGS(x) ((x)->sxpinfo.gp) // ENVFLAGS unused
#define ENVSXP 4 // ENVSXP used 63 times in 25 packages
#define ERROR <defined> // ERROR used 6406 times in 293 packages
#define EVAL(x) Rf_eval(x,R_GlobalEnv) // EVAL used 108 times in 13 packages
#define EXPRSXP 20 // EXPRSXP used 84 times in 14 packages
#define EXTPTRSXP 22 // EXTPTRSXP used 386 times in 55 packages
#define EXTPTR_PROT(x) ((x)->u.listsxp.cdrval) // EXTPTR_PROT used 5 times in rJava, pryr
#define EXTPTR_PTR(x) ((x)->u.listsxp.carval) // EXTPTR_PTR used 428 times in 15 packages
#define EXTPTR_TAG(x) ((x)->u.listsxp.tagval) // EXTPTR_TAG used 9 times in excel.link, pryr, rJava, gsl
#define EncodeComplex Rf_EncodeComplex // EncodeComplex unused
#define EncodeInteger Rf_EncodeInteger // EncodeInteger used 2 times in qtbase, RGtk2
#define EncodeLogical Rf_EncodeLogical // EncodeLogical used 2 times in qtbase, RGtk2
#define EncodeReal Rf_EncodeReal // EncodeReal used 2 times in qtbase, RGtk2
#define EncodeReal0 Rf_EncodeReal0 // EncodeReal0 unused
#define F77_CALL(x) x_ // F77_CALL used 4269 times in 195 packages
#define F77_COM(x) x_ // F77_COM used 2 times in igraph
#define F77_COMDECL(x) x_ // F77_COMDECL used 2 times in igraph
#define F77_NAME(x) x_ // F77_NAME used 1913 times in 117 packages
#define F77_SUB(x) x_ // F77_SUB used 771 times in 89 packages
#define FALSE 0 // FALSE used 17931 times in 545 packages
#define FORMALS(x) ((x)->u.closxp.formals) // FORMALS used 15 times in qtpaint, RSclient, PBSddesolve, Rserve, covr, pryr, rgp, testthat, RandomFields
#define FRAME(x) ((x)->u.envsxp.frame) // FRAME used 19 times in deTestSet, IRISSeismic, pryr, BayesBridge, datamap, BayesLogit
#define FREESXP 31 // FREESXP used 4 times in rtkpp, rtkore
#define FUNSXP 99 // FUNSXP used 6 times in dplyr, rtkpp, data.table, rtkore
#define Free(p) (R_chk_free( (void *)(p) ), (p) = __null) // Free used 21329 times in 683 packages
#define GAxisPars Rf_GAxisPars // GAxisPars unused
#define GETX11IMAGE_H_ // GETX11IMAGE_H_ unused
#define GET_ATTR(x,what) Rf_getAttrib(x, what) // GET_ATTR used 66 times in kergp, rggobi, XML, maptools, dbarts, RGtk2
#define GET_CLASS(x) Rf_getAttrib(x, R_ClassSymbol) // GET_CLASS used 56 times in 17 packages
#define GET_COLNAMES(x) Rf_GetColNames(x) // GET_COLNAMES used 14 times in multic, pomp
#define GET_DIM(x) Rf_getAttrib(x, R_DimSymbol) // GET_DIM used 421 times in 55 packages
#define GET_DIMNAMES(x) Rf_getAttrib(x, R_DimNamesSymbol) // GET_DIMNAMES used 60 times in multic, lfe, pomp, adaptivetau
#define GET_LENGTH(x) Rf_length(x) // GET_LENGTH used 1265 times in 28 packages
#define GET_LEVELS(x) Rf_getAttrib(x, R_LevelsSymbol) // GET_LEVELS used 13 times in rjson, cba, yaml
#define GET_NAMES(x) Rf_getAttrib(x, R_NamesSymbol) // GET_NAMES used 84 times in 22 packages
#define GET_ROWNAMES(x) Rf_GetRowNames(x) // GET_ROWNAMES used 46 times in multic, pomp, RSQLite
#define GET_SLOT(x, what) R_do_slot(x, what) // GET_SLOT used 1680 times in 42 packages
#define GET_TSP(x) Rf_getAttrib(x, R_TspSymbol) // GET_TSP unused
#define GetArrayDimnames Rf_GetArrayDimnames // GetArrayDimnames unused
#define GetColNames Rf_GetColNames // GetColNames unused
#define GetMatrixDimnames Rf_GetMatrixDimnames // GetMatrixDimnames used 2 times in Kmisc, optmatch
#define GetOption Rf_GetOption // GetOption used 5 times in rgl, gmp, Cairo, RGtk2
#define GetOption1 Rf_GetOption1 // GetOption1 used 1 time in PCICt
#define GetOptionDigits Rf_GetOptionDigits // GetOptionDigits unused
#define GetOptionWidth Rf_GetOptionWidth // GetOptionWidth unused
#define GetRowNames Rf_GetRowNames // GetRowNames unused
#define HASHTAB(x) ((x)->u.envsxp.hashtab) // HASHTAB used 12 times in Rcpp, pryr, datamap, Rcpp11, qtbase
#define HAVE_ALLOCA_H 1 // HAVE_ALLOCA_H used 15 times in treatSens, Matrix, TMB, pbdZMQ, ore, dbarts
#define HAVE_AQUA 1 // HAVE_AQUA used 13 times in 11 packages
#define HAVE_EXPM1 1 // HAVE_EXPM1 used 4 times in igraph, Rcpp, BiasedUrn, Rcpp11
#define HAVE_F77_UNDERSCORE 1 // HAVE_F77_UNDERSCORE used 2 times in igraph
#define HAVE_HYPOT 1 // HAVE_HYPOT used 6 times in BH, Rcpp, Rcpp11
#define HAVE_LOG1P 1 // HAVE_LOG1P used 3 times in igraph, Rcpp, Rcpp11
#define HAVE_WORKING_LOG1P 1 // HAVE_WORKING_LOG1P unused
#define IEEE_754 1 // IEEE_754 used 47 times in igraph, Rcpp, data.table, stringi
#define INCREMENT_NAMED(x) do { SEXP __x__ = (x); if (((__x__)->sxpinfo.named) != 2) (((__x__)->sxpinfo.named)=(((__x__)->sxpinfo.named) + 1)); } while (0) // INCREMENT_NAMED unused
#define INCREMENT_REFCNT(x) do {} while(0) // INCREMENT_REFCNT unused
#define INLINE_PROTECT // INLINE_PROTECT unused
#define INTEGER(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // INTEGER used 41659 times in 758 packages
#define INTEGER_DATA(x) (((int *) (((SEXPREC_ALIGN *) (x)) + 1))) // INTEGER_DATA used 246 times in RPostgreSQL, excel.link, rggobi, XML, biganalytics, RTextTools, bcp, RGtk2
#define INTEGER_POINTER(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // INTEGER_POINTER used 2082 times in 83 packages
#define INTEGER_VALUE(x) Rf_asInteger(x) // INTEGER_VALUE used 451 times in 47 packages
#define INTERNAL(x) ((x)->u.symsxp.internal) // INTERNAL used 1014 times in 63 packages
#define INTSXP 13 // INTSXP used 6341 times in 471 packages
#define ISNA(x) R_IsNA(x) // ISNA used 649 times in 100 packages
#define ISNAN(x) R_isnancpp(x) // ISNAN used 1342 times in 146 packages
#define IS_CHARACTER(x) (((x)->sxpinfo.type) == 16) // IS_CHARACTER used 45 times in 16 packages
#define IS_COMPLEX(x) (((x)->sxpinfo.type) == 15) // IS_COMPLEX used 2 times in rjson, spsurvey
#define IS_GETTER_CALL(call) (((((call)->u.listsxp.cdrval))->u.listsxp.carval) == R_TmpvalSymbol) // IS_GETTER_CALL unused
#define IS_INTEGER(x) Rf_isInteger(x) // IS_INTEGER used 61 times in 19 packages
#define IS_LIST(x) Rf_isVector(x) // IS_LIST used 12 times in RPostgreSQL, Runuran, XML, PythonInR, ROracle
#define IS_LOGICAL(x) (((x)->sxpinfo.type) == 10) // IS_LOGICAL used 28 times in 12 packages
#define IS_LONG_VEC(x) ((((VECSEXP) (x))->vecsxp.length) == -1) // IS_LONG_VEC used 1 time in RProtoBuf
#define IS_NUMERIC(x) (((x)->sxpinfo.type) == 14) // IS_NUMERIC used 57 times in 17 packages
#define IS_RAW(x) (((x)->sxpinfo.type) == 24) // IS_RAW used 3 times in digest, ROracle
#define IS_S4_OBJECT(x) ((x)->sxpinfo.gp & ((unsigned short)(1<<4))) // IS_S4_OBJECT used 23 times in Rmosek, Runuran, data.table, xts, Matrix, slam, zoo, HiPLARM, OpenMx, tau
#define IS_SCALAR(x, type) (((x)->sxpinfo.type) == (type) && (((VECSEXP) (x))->vecsxp.length) == 1) // IS_SCALAR unused
#define IS_SIMPLE_SCALAR(x, type) ((((x)->sxpinfo.type) == (type) && (((VECSEXP) (x))->vecsxp.length) == 1) && ((x)->attrib) == R_NilValue) // IS_SIMPLE_SCALAR unused
#define IS_VECTOR(x) Rf_isVector(x) // IS_VECTOR used 20 times in igraph, sprint, rggobi, catnet, RGtk2, sdnet
#define IndexWidth Rf_IndexWidth // IndexWidth unused
#define LANGSXP 6 // LANGSXP used 1276 times in 53 packages
#define LCONS(a, b) Rf_lcons((a), (b)) // LCONS used 212 times in 24 packages
#define LENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? R_BadLongVector(x, "/var/folders/t8/1ry582nx6438y8pn6gk20f3c0000gn/T/preprocessor_test2759381993482855372.cpp", 496) : (((VECSEXP) (x))->vecsxp.length)) // LENGTH used 5845 times in 356 packages
#define LEVELS(x) ((x)->sxpinfo.gp) // LEVELS used 18 times in rtdists, rPref, BsMD, data.table, stringi, dplyr, OBsMD, pbdZMQ, astrochron, RandomFields
#define LGLSXP 10 // LGLSXP used 1430 times in 166 packages
#define LISTSXP 2 // LISTSXP used 87 times in 21 packages
#define LISTVAL(x) ((x)->u.listsxp) // LISTVAL unused
#define LIST_POINTER(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // LIST_POINTER used 15 times in RPostgreSQL, rggobi, XML, AdaptFitOS, locfit, RMySQL, RGtk2
#define LIST_VALUE(x) Rf_error("the 'value' of a list object is not defined") // LIST_VALUE unused
#define LOCAL_EVALUATOR // LOCAL_EVALUATOR used 11 times in rggobi, XML, ifultools, RGtk2
#define LOGICAL(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // LOGICAL used 4473 times in 288 packages
#define LOGICAL_DATA(x) (((int *) (((SEXPREC_ALIGN *) (x)) + 1))) // LOGICAL_DATA used 114 times in excel.link, rggobi, XML, redland, RSNNS, kza, lazy, NMF, littler, RGtk2
#define LOGICAL_POINTER(x) ((int *) (((SEXPREC_ALIGN *) (x)) + 1)) // LOGICAL_POINTER used 144 times in 15 packages
#define LOGICAL_VALUE(x) Rf_asLogical(x) // LOGICAL_VALUE used 110 times in rphast, rtfbs, bigalgebra, subplex, GenABEL
#define LONG_VECTOR_SUPPORT // LONG_VECTOR_SUPPORT used 56 times in stringdist, matrixStats, RApiSerialize, Rhpc, pbdMPI, Rcpp11, Matrix
#define LONG_VEC_LENGTH(x) ((R_long_vec_hdr_t *) (x))[-1].lv_length // LONG_VEC_LENGTH used 1 time in Rcpp11
#define LONG_VEC_TRUELENGTH(x) ((R_long_vec_hdr_t *) (x))[-1].lv_truelength // LONG_VEC_TRUELENGTH unused
#define LOOP_WITH_INTERRUPT_CHECK(LOOP, ncheck, n, ...) do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; LOOP(__intr_end__, ...); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0) // LOOP_WITH_INTERRUPT_CHECK unused
#define LTY_BLANK -1 // LTY_BLANK used 6 times in RSvgDevice, R2SWF, rvg, svglite
#define LTY_DASHED 4 + (4<<4) // LTY_DASHED used 4 times in qtutils, devEMF, RSvgDevice, rvg
#define LTY_DOTDASH 1 + (3<<4) + (4<<8) + (3<<12) // LTY_DOTDASH used 3 times in qtutils, devEMF, RSvgDevice
#define LTY_DOTTED 1 + (3<<4) // LTY_DOTTED used 4 times in qtutils, devEMF, RSvgDevice, rvg
#define LTY_LONGDASH 7 + (3<<4) // LTY_LONGDASH used 4 times in qtutils, devEMF, RSvgDevice, rvg
#define LTY_SOLID 0 // LTY_SOLID used 15 times in qtutils, devEMF, rscproxy, cairoDevice, Cairo, RSvgDevice, R2SWF, rvg, JavaGD, svglite
#define LTY_TWODASH 2 + (2<<4) + (6<<8) + (2<<12) // LTY_TWODASH used 2 times in qtutils, RSvgDevice
#define La_extern extern // La_extern unused
#define LibExport // LibExport used 2 times in hsmm
#define LibExtern extern // LibExtern used 4 times in rJava
#define LibImport // LibImport unused
#define MAKE_CLASS(what) R_do_MAKE_CLASS(what) // MAKE_CLASS used 231 times in 29 packages
#define MARK(x) ((x)->sxpinfo.mark) // MARK used 251 times in 21 packages
#define MARK_NOT_MUTABLE(x) (((x)->sxpinfo.named)=(2)) // MARK_NOT_MUTABLE unused
#define MAX_GRAPHICS_SYSTEMS 256 // MAX_GRAPHICS_SYSTEMS unused
#define MAX_NUM_SEXPTYPE (1<<5) // MAX_NUM_SEXPTYPE unused
#define MAYBE_REFERENCED(x) (! (((x)->sxpinfo.named) == 0)) // MAYBE_REFERENCED unused
#define MAYBE_SHARED(x) (((x)->sxpinfo.named) > 1) // MAYBE_SHARED unused
#define MESSAGE <defined> // MESSAGE used 172 times in 33 packages
#define MISSING(x) ((x)->sxpinfo.gp & 15) // MISSING used 125 times in 25 packages
#define MISSING_MASK 15 // MISSING_MASK used 10 times in rJPSGCS
#define MOD_ITERATE(n, n1, n2, i, i1, i2, loop_body) do { i = i1 = i2 = 0; do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, ++i) { loop_body } } while (0); } while (0) // MOD_ITERATE unused
#define MOD_ITERATE3(n, n1, n2, n3, i, i1, i2, i3, loop_body) do { i = i1 = i2 = i3 = 0; do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, ++i) { loop_body } } while (0); } while (0) // MOD_ITERATE3 unused
#define MOD_ITERATE3_CHECK(ncheck, n, n1, n2, n3, i, i1, i2, i3, loop_body) do { i = i1 = i2 = i3 = 0; do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; do { for (; i < __intr_end__; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, ++i) { loop_body } } while (0); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0); } while (0) // MOD_ITERATE3_CHECK unused
#define MOD_ITERATE3_CORE(n, n1, n2, n3, i, i1, i2, i3, loop_body) do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, ++i) { loop_body } } while (0) // MOD_ITERATE3_CORE unused
#define MOD_ITERATE4(n, n1, n2, n3, n4, i, i1, i2, i3, i4, loop_body) do { i = i1 = i2 = i3 = i4 = 0; do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, ++i) { loop_body } } while (0); } while (0) // MOD_ITERATE4 unused
#define MOD_ITERATE4_CHECK(ncheck, n, n1, n2, n3, n4, i, i1, i2, i3, i4, loop_body) do { i = i1 = i2 = i3 = i4 = 0; do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; do { for (; i < __intr_end__; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, ++i) { loop_body } } while (0); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0); } while (0) // MOD_ITERATE4_CHECK unused
#define MOD_ITERATE4_CORE(n, n1, n2, n3, n4, i, i1, i2, i3, i4, loop_body) do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, ++i) { loop_body } } while (0) // MOD_ITERATE4_CORE unused
#define MOD_ITERATE5(n, n1, n2, n3, n4, n5, i, i1, i2, i3, i4, i5, loop_body) do { i = i1 = i2 = i3 = i4 = i5 = 0; do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, i5 = (++i5 == n5) ? 0 : i5, ++i) { loop_body } } while (0); } while (0) // MOD_ITERATE5 unused
#define MOD_ITERATE5_CHECK(ncheck, n, n1, n2, n3, n4, n5, i, i1, i2, i3, i4, i5, loop_body) do { i = i1 = i2 = i3 = i4 = i5 = 0; do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; do { for (; i < __intr_end__; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, i5 = (++i5 == n5) ? 0 : i5, ++i) { loop_body } } while (0); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0); } while (0) // MOD_ITERATE5_CHECK unused
#define MOD_ITERATE5_CORE(n, n1, n2, n3, n4, n5, i, i1, i2, i3, i4, i5, loop_body) do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, i3 = (++i3 == n3) ? 0 : i3, i4 = (++i4 == n4) ? 0 : i4, i5 = (++i5 == n5) ? 0 : i5, ++i) { loop_body } } while (0) // MOD_ITERATE5_CORE unused
#define MOD_ITERATE_CHECK(ncheck, n, n1, n2, i, i1, i2, loop_body) do { i = i1 = i2 = 0; do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; do { for (; i < __intr_end__; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, ++i) { loop_body } } while (0); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0); } while (0) // MOD_ITERATE_CHECK unused
#define MOD_ITERATE_CORE(n, n1, n2, i, i1, i2, loop_body) do { for (; i < n; i1 = (++i1 == n1) ? 0 : i1, i2 = (++i2 == n2) ? 0 : i2, ++i) { loop_body } } while (0) // MOD_ITERATE_CORE unused
#define M_1_PI 0.318309886183790671537767526745028724 // M_1_PI used 42 times in SpatialExtremes, decon, mvabund, geoR, geoRglm, ExomeDepth, libamtrack, miRada, RandomFields, DescTools
#define M_1_SQRT_2PI 0.398942280401432677939946059934 // M_1_SQRT_2PI used 61 times in 23 packages
#define M_2PI 6.283185307179586476925286766559 // M_2PI used 106 times in 16 packages
#define M_2_PI 0.636619772367581343075535053490057448 // M_2_PI used 27 times in RandomFieldsUtils, dynaTree, ExomeDepth, RandomFields, svd, DescTools, spatstat
#define M_2_SQRTPI 1.12837916709551257389615890312154517 // M_2_SQRTPI used 6 times in excursions, PearsonDS, SpecsVerification, ExomeDepth
#define M_E 2.71828182845904523536028747135266250 // M_E used 40 times in Runuran, lamW, gmum.r, ExomeDepth, CEC, PoweR, TMB, Bmix, tgp, RcppShark
#define M_LN10 2.30258509299404568401799145468436421 // M_LN10 used 27 times in monomvn, rphast, secr, Runuran, rtfbs, PlayerRatings, ExomeDepth, spaMM, logistf, laGP
#define M_LN2 0.693147180559945309417232121458176568 // M_LN2 used 166 times in 30 packages
#define M_LN_2PI 1.837877066409345483560659472811 // M_LN_2PI used 4 times in OpenMx, MPSEM
#define M_LN_SQRT_2PI 0.918938533204672741780329736406 // M_LN_SQRT_2PI used 111 times in 31 packages
#define M_LN_SQRT_PI 0.572364942924700087071713675677 // M_LN_SQRT_PI used 29 times in 12 packages
#define M_LN_SQRT_PId2 0.225791352644727432363097614947 // M_LN_SQRT_PId2 used 9 times in MCMCpack, MasterBayes, phcfM, RandomFields, gof
#define M_LOG10E 0.434294481903251827651128918916605082 // M_LOG10E used 2 times in ExomeDepth
#define M_LOG10_2 0.301029995663981195213738894724 // M_LOG10_2 used 9 times in Bessel
#define M_LOG2E 1.44269504088896340735992468100189214 // M_LOG2E used 2 times in ExomeDepth
#define M_PI 3.14159265358979323846264338327950288 // M_PI used 1853 times in 207 packages
#define M_PI_2 1.57079632679489661923132169163975144 // M_PI_2 used 149 times in 28 packages
#define M_PI_4 0.785398163397448309615660845819875721 // M_PI_4 used 18 times in 12 packages
#define M_SQRT1_2 0.707106781186547524400844362104849039 // M_SQRT1_2 used 26 times in SpatialExtremes, gmwm, excursions, forecast, subrank, dplR, ExomeDepth, SpecsVerification
#define M_SQRT2 1.41421356237309504880168872420969808 // M_SQRT2 used 72 times in 23 packages
#define M_SQRT_2dPI 0.797884560802865355879892119869 // M_SQRT_2dPI used 2 times in SpatialExtremes, energy
#define M_SQRT_3 1.732050807568877293527446341506 // M_SQRT_3 used 4 times in poibin, SpatialExtremes, RandomFields, DescTools
#define M_SQRT_32 5.656854249492380195206754896838 // M_SQRT_32 used 10 times in MCMCpack, MasterBayes, rforensicbatwing, phcfM, gof
#define M_SQRT_PI 1.772453850905516027298167483341 // M_SQRT_PI used 31 times in SpatialExtremes, geoR, plugdensity, anchors, BayesBridge, copula, RandomFields, bda, DescTools
#define Memcpy(p,q,n) memcpy( p, q, (size_t)(n) * sizeof(*p) ) // Memcpy used 483 times in 32 packages
#define Memzero(p,n) memset(p, 0, (size_t)(n) * sizeof(*p)) // Memzero used 5 times in Matrix
#define NAMED(x) ((x)->sxpinfo.named) // NAMED used 62 times in 22 packages
#define NAMEDMAX 2 // NAMEDMAX unused
#define NA_INTEGER R_NaInt // NA_INTEGER used 1520 times in 183 packages
#define NA_LOGICAL R_NaInt // NA_LOGICAL used 355 times in 73 packages
#define NA_REAL R_NaReal // NA_REAL used 1667 times in 226 packages
#define NA_STRING R_NaString // NA_STRING used 574 times in 90 packages
#define NEW(class_def) R_do_new_object(class_def) // NEW used 1245 times in 153 packages
#define NEWSXP 30 // NEWSXP used 4 times in rtkpp, rtkore
#define NEW_CHARACTER(n) Rf_allocVector(16,n) // NEW_CHARACTER used 636 times in 49 packages
#define NEW_COMPLEX(n) Rf_allocVector(15,n) // NEW_COMPLEX used 3 times in igraph, ifs
#define NEW_INTEGER(n) Rf_allocVector(13,n) // NEW_INTEGER used 870 times in 94 packages
#define NEW_LIST(n) Rf_allocVector(19,n) // NEW_LIST used 532 times in 52 packages
#define NEW_LOGICAL(n) Rf_allocVector(10,n) // NEW_LOGICAL used 157 times in 38 packages
#define NEW_NUMERIC(n) Rf_allocVector(14,n) // NEW_NUMERIC used 1139 times in 112 packages
#define NEW_OBJECT(class_def) R_do_new_object(class_def) // NEW_OBJECT used 218 times in 25 packages
#define NEW_RAW(n) Rf_allocVector(24,n) // NEW_RAW used 9 times in RPostgreSQL, rggobi, ROracle, oce
#define NEW_STRING(n) Rf_allocVector(16,n) // NEW_STRING used 38 times in 11 packages
#define NILSXP 0 // NILSXP used 169 times in 44 packages
#define NORET __attribute__((noreturn)) // NORET unused
#define NOT_SHARED(x) (! (((x)->sxpinfo.named) > 1)) // NOT_SHARED unused
#define NO_REFERENCES(x) (((x)->sxpinfo.named) == 0) // NO_REFERENCES unused
#define NULL_ENTRY // NULL_ENTRY used 170 times in 12 packages
#define NULL_USER_OBJECT R_NilValue // NULL_USER_OBJECT used 8268 times in rggobi, XML, rjson, bigmemory, dbarts, lazy, RGtk2
#define NUMERIC_DATA(x) (((double *) (((SEXPREC_ALIGN *) (x)) + 1))) // NUMERIC_DATA used 71 times in excel.link, rggobi, XML, biganalytics, bigalgebra, bcp, RGtk2
#define NUMERIC_POINTER(x) ((double *) (((SEXPREC_ALIGN *) (x)) + 1)) // NUMERIC_POINTER used 2527 times in 101 packages
#define NUMERIC_VALUE(x) Rf_asReal(x) // NUMERIC_VALUE used 178 times in 25 packages
#define NewFrameConfirm Rf_NewFrameConfirm // NewFrameConfirm unused
#define NoDevices Rf_NoDevices // NoDevices used 1 time in tkrplot
#define NonNullStringMatch Rf_NonNullStringMatch // NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
#define NumDevices Rf_NumDevices // NumDevices used 3 times in JavaGD
#define OBJECT(x) ((x)->sxpinfo.obj) // OBJECT used 102 times in 28 packages
#define PI 3.14159265358979323846264338327950288 // PI unused
#define PREXPR(e) R_PromiseExpr(e) // PREXPR used 4 times in igraph, lazyeval
#define PRINTNAME(x) ((x)->u.symsxp.pname) // PRINTNAME used 92 times in 29 packages
#define PROBLEM <defined> // PROBLEM used 861 times in 78 packages
#define PROMSXP 5 // PROMSXP used 43 times in 14 packages
#define PROTECT(s) Rf_protect(s) // PROTECT used 24686 times in 767 packages
#define PROTECT_WITH_INDEX(x,i) R_ProtectWithIndex(x,i) // PROTECT_WITH_INDEX used 91 times in 27 packages
#define PRTUTIL_H_ // PRTUTIL_H_ unused
#define PairToVectorList Rf_PairToVectorList // PairToVectorList used 7 times in cba, rcdd
#define PrintValue Rf_PrintValue // PrintValue used 119 times in 13 packages
#define QDFLAG_DISPLAY_LIST 0x0001 // QDFLAG_DISPLAY_LIST unused
#define QDFLAG_INTERACTIVE 0x0002 // QDFLAG_INTERACTIVE unused
#define QDFLAG_RASTERIZED 0x0004 // QDFLAG_RASTERIZED unused
#define QNPF_REDRAW 0x0001 // QNPF_REDRAW unused
#define QPFLAG_ANTIALIAS 0x0100 // QPFLAG_ANTIALIAS unused
#define QP_Flags_CFLoop 0x0001 // QP_Flags_CFLoop unused
#define QP_Flags_Cocoa 0x0002 // QP_Flags_Cocoa unused
#define QP_Flags_Front 0x0004 // QP_Flags_Front unused
#define QuartzParam_EmbeddingFlags "embeddeding flags" // QuartzParam_EmbeddingFlags unused
#define RAW(x) ((Rbyte *) (((SEXPREC_ALIGN *) (x)) + 1)) // RAW used 880 times in 99 packages
#define RAWSXP 24 // RAWSXP used 587 times in 92 packages
#define RAW_POINTER(x) ((Rbyte *) (((SEXPREC_ALIGN *) (x)) + 1)) // RAW_POINTER used 31 times in RPostgreSQL, CrypticIBDcheck, rggobi, seqinr, IRISSeismic, oce, RGtk2
#define RAW_VALUE(x) Rf_error("the 'value' of a raw object is not defined") // RAW_VALUE unused
#define RDEBUG(x) ((x)->sxpinfo.debug) // RDEBUG used 69 times in rmetasim
#define REAL(x) ((double *) (((SEXPREC_ALIGN *) (x)) + 1)) // REAL used 30947 times in 687 packages
#define REALSXP 14 // REALSXP used 10171 times in 573 packages
#define RECOVER <defined> // RECOVER used 170 times in 14 packages
#define RECURSIVE_DATA(x) (((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))) // RECURSIVE_DATA used 5 times in XML
#define REFCNT(x) 0 // REFCNT unused
#define REFCNTMAX (4 - 1) // REFCNTMAX unused
#define REMBEDDED_H_ // REMBEDDED_H_ unused
#define REPROTECT(x,i) R_Reprotect(x,i) // REPROTECT used 130 times in 25 packages
#define RGBpar Rf_RGBpar // RGBpar used 3 times in Cairo, jpeg
#define RGBpar3 Rf_RGBpar3 // RGBpar3 unused
#define RINTERFACE_H_ // RINTERFACE_H_ unused
#define RMATH_H // RMATH_H used 1 time in phyclust
#define RSTEP(x) ((x)->sxpinfo.spare) // RSTEP unused
#define RTRACE(x) ((x)->sxpinfo.trace) // RTRACE unused
#define R_ALLOCATOR_TYPE // R_ALLOCATOR_TYPE unused
#define R_ALPHA(col) (((col)>>24)&255) // R_ALPHA used 35 times in 13 packages
#define R_APPLIC_H_ // R_APPLIC_H_ unused
#define R_ARITH_H_ // R_ARITH_H_ unused
#define R_BLAS_H // R_BLAS_H used 2 times in slam
#define R_BLUE(col) (((col)>>16)&255) // R_BLUE used 29 times in 12 packages
#define R_CALLBACKS_H // R_CALLBACKS_H unused
#define R_COMPLEX_H // R_COMPLEX_H used 1 time in uniqueAtomMat
#define R_CONNECTIONS_VERSION 1 // R_CONNECTIONS_VERSION used 3 times in curl, iotools
#define R_Calloc(n, t) (t *) R_chk_calloc( (size_t) (n), sizeof(t) ) // R_Calloc used 81 times in clpAPI, cplexAPI, poppr, rLindo, glpkAPI
#define R_CheckStack() do { void __attribute__((noreturn)) R_SignalCStackOverflow(intptr_t); int dummy; intptr_t usage = R_CStackDir * (R_CStackStart - (uintptr_t)&dummy); if(R_CStackLimit != -1 && usage > ((intptr_t) R_CStackLimit)) R_SignalCStackOverflow(usage); } while (0) // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
#define R_DEFINES_H // R_DEFINES_H unused
#define R_ERROR_H_ // R_ERROR_H_ unused
#define R_EXT_BOOLEAN_H_ // R_EXT_BOOLEAN_H_ used 2 times in jpeg, Rcpp11
#define R_EXT_CONNECTIONS_H_ // R_EXT_CONNECTIONS_H_ unused
#define R_EXT_CONSTANTS_H_ // R_EXT_CONSTANTS_H_ unused
#define R_EXT_DYNLOAD_H_ // R_EXT_DYNLOAD_H_ unused
#define R_EXT_EVENTLOOP_H // R_EXT_EVENTLOOP_H unused
#define R_EXT_ITERMACROS_H_ // R_EXT_ITERMACROS_H_ unused
#define R_EXT_MATHTHREADS_H_ // R_EXT_MATHTHREADS_H_ unused
#define R_EXT_MEMORY_H_ // R_EXT_MEMORY_H_ unused
#define R_EXT_PARSE_H_ // R_EXT_PARSE_H_ used 2 times in Rserve
#define R_EXT_PRINT_H_ // R_EXT_PRINT_H_ used 6 times in spTDyn, spTimer
#define R_EXT_QUARTZDEVICE_H_ // R_EXT_QUARTZDEVICE_H_ unused
#define R_EXT_RALLOCATORS_H_ // R_EXT_RALLOCATORS_H_ unused
#define R_EXT_RSTARTUP_H_ // R_EXT_RSTARTUP_H_ unused
#define R_EXT_UTILS_H_ // R_EXT_UTILS_H_ unused
#define R_EXT_VISIBILITY_H_ // R_EXT_VISIBILITY_H_ unused
#define R_FINITE(x) R_finite(x) // R_FINITE used 1387 times in 145 packages
#define R_FTP_HTTP_H_ // R_FTP_HTTP_H_ unused
#define R_Free(p) (R_chk_free( (void *)(p) ), (p) = __null) // R_Free used 78 times in clpAPI, cplexAPI, poppr, glpkAPI
#define R_GE_version 10 // R_GE_version used 51 times in 12 packages
#define R_GRAPHICSDEVICE_H_ // R_GRAPHICSDEVICE_H_ unused
#define R_GRAPHICSENGINE_H_ // R_GRAPHICSENGINE_H_ unused
#define R_GREEN(col) (((col)>> 8)&255) // R_GREEN used 29 times in 12 packages
#define R_ICONV_H // R_ICONV_H unused
#define R_INLINE inline // R_INLINE used 330 times in 34 packages
#define R_INTERNALS_H_ // R_INTERNALS_H_ used 7 times in uniqueAtomMat, rtkpp, rtkore, spatstat
#define R_ITERATE(n, i, loop_body) do { i = 0; do { for (; i < n; ++i) { loop_body } } while (0); } while (0) // R_ITERATE unused
#define R_ITERATE_CHECK(ncheck, n, i, loop_body) do { i = 0; do { for (size_t __intr_threshold__ = ncheck; 1; __intr_threshold__ += ncheck) { size_t __intr_end__ = n < __intr_threshold__ ? n : __intr_threshold__; do { for (; i < __intr_end__; ++i) { loop_body } } while (0); if (__intr_end__ == n) break; else R_CheckUserInterrupt(); } } while (0); } while (0) // R_ITERATE_CHECK unused
#define R_ITERATE_CORE(n, i, loop_body) do { for (; i < n; ++i) { loop_body } } while (0) // R_ITERATE_CORE unused
#define R_LAPACK_H // R_LAPACK_H unused
#define R_LEN_T_MAX 2147483647 // R_LEN_T_MAX used 4 times in stringdist, matrixStats, FREGAT, Rcpp11
#define R_LINPACK_H_ // R_LINPACK_H_ unused
#define R_LONG_VEC_TOKEN -1 // R_LONG_VEC_TOKEN used 1 time in Rcpp11
#define R_OPAQUE(col) ((((col)>>24)&255) == 255) // R_OPAQUE used 6 times in devEMF, tikzDevice, cairoDevice
#define R_PROBLEM_BUFSIZE 4096 // R_PROBLEM_BUFSIZE unused
#define R_RANDOM_H // R_RANDOM_H unused
#define R_RCONFIG_H // R_RCONFIG_H unused
#define R_RED(col) (((col) )&255) // R_RED used 37 times in 12 packages
#define R_RGB(r,g,b) ((r)|((g)<<8)|((b)<<16)|0xFF000000) // R_RGB used 23 times in qtutils, rscproxy, cairoDevice, Cairo, jpeg, R2SWF, rvg, JavaGD, png, svglite
#define R_RGBA(r,g,b,a) ((r)|((g)<<8)|((b)<<16)|((a)<<24)) // R_RGBA used 6 times in Cairo, jpeg, png, showtext
#define R_RS_H // R_RS_H unused
#define R_R_H // R_R_H used 9 times in TMB, uniqueAtomMat, DatABEL, GenABEL, VariABEL
#define R_Realloc(p,n,t) (t *) R_chk_realloc( (void *)(p), (size_t)((n) * sizeof(t)) ) // R_Realloc used 3 times in poppr, seqminer, gpuR
#define R_SHORT_LEN_MAX 2147483647 // R_SHORT_LEN_MAX used 1 time in pbdMPI
#define R_STATS_PACKAGE_H // R_STATS_PACKAGE_H unused
#define R_S_H // R_S_H unused
#define R_TRANSPARENT(col) ((((col)>>24)&255) == 0) // R_TRANSPARENT used 16 times in qtutils, devEMF, tikzDevice, Cairo
#define R_TRANWHITE (((255)|((255)<<8)|((255)<<16)|((0)<<24))) // R_TRANWHITE used 6 times in qtutils, devEMF, rscproxy, cairoDevice, showtext
#define R_USE_PROTOTYPES 1 // R_USE_PROTOTYPES used 10 times in qtutils, rscproxy, tikzDevice, R2SWF, showtext
#define R_VERSION_STRING "3.2.4" // R_VERSION_STRING unused
#define R_XDR_DOUBLE_SIZE 8 // R_XDR_DOUBLE_SIZE used 2 times in rgdal
#define R_XDR_INTEGER_SIZE 4 // R_XDR_INTEGER_SIZE used 3 times in rgdal
#define R_XLEN_T_MAX 4503599627370496 // R_XLEN_T_MAX used 7 times in stringdist, Matrix, matrixStats, RApiSerialize, Rhpc
#define Realloc(p,n,t) (t *) R_chk_realloc( (void *)(p), (size_t)((n) * sizeof(t)) ) // Realloc used 244 times in 57 packages
#define S3Class Rf_S3Class // S3Class used 4 times in RInside, littler
#define S4SXP 25 // S4SXP used 71 times in 15 packages
#define S4_OBJECT_MASK ((unsigned short)(1<<4)) // S4_OBJECT_MASK unused
#define SETLENGTH(x,v) do { SEXP sl__x__ = (x); R_xlen_t sl__v__ = (v); if (((((VECSEXP) (sl__x__))->vecsxp.length) == -1)) (((R_long_vec_hdr_t *) (sl__x__))[-1].lv_length = (sl__v__)); else ((((VECSEXP) (sl__x__))->vecsxp.length) = ((R_len_t) sl__v__)); } while (0) // SETLENGTH used 65 times in 11 packages
#define SETLEVELS(x,v) (((x)->sxpinfo.gp)=((unsigned short)v)) // SETLEVELS used 2 times in Rcpp11
#define SET_ATTR(x, what, n) Rf_setAttrib(x, what, n) // SET_ATTR used 12 times in rphast, kergp, rtfbs, TPmsm, dbarts, PBSmapping
#define SET_CLASS(x, n) Rf_setAttrib(x, R_ClassSymbol, n) // SET_CLASS used 120 times in 19 packages
#define SET_DDVAL(x,v) ((v) ? (((x)->sxpinfo.gp) |= 1) : (((x)->sxpinfo.gp) &= ~1)) // SET_DDVAL unused
#define SET_DDVAL_BIT(x) (((x)->sxpinfo.gp) |= 1) // SET_DDVAL_BIT unused
#define SET_DIM(x, n) Rf_setAttrib(x, R_DimSymbol, n) // SET_DIM used 54 times in 18 packages
#define SET_DIMNAMES(x, n) Rf_setAttrib(x, R_DimNamesSymbol, n) // SET_DIMNAMES used 17 times in multic, lfe, pomp, subplex, TPmsm, cba
#define SET_ELEMENT(x, i, val) SET_VECTOR_ELT(x, i, val) // SET_ELEMENT used 344 times in 18 packages
#define SET_ENVFLAGS(x,v) (((x)->sxpinfo.gp)=(v)) // SET_ENVFLAGS unused
#define SET_LENGTH(x, n) (x = Rf_lengthgets(x, n)) // SET_LENGTH used 45 times in 12 packages
#define SET_LEVELS(x, l) Rf_setAttrib(x, R_LevelsSymbol, l) // SET_LEVELS used 9 times in cba, rggobi
#define SET_LONG_VEC_LENGTH(x,v) (((R_long_vec_hdr_t *) (x))[-1].lv_length = (v)) // SET_LONG_VEC_LENGTH unused
#define SET_LONG_VEC_TRUELENGTH(x,v) (((R_long_vec_hdr_t *) (x))[-1].lv_truelength = (v)) // SET_LONG_VEC_TRUELENGTH unused
#define SET_MISSING(x,v) do { SEXP __x__ = (x); int __v__ = (v); int __other_flags__ = __x__->sxpinfo.gp & ~15; __x__->sxpinfo.gp = __other_flags__ | __v__; } while (0) // SET_MISSING used 1 time in sprint
#define SET_NAMED(x, v) (((x)->sxpinfo.named)=(v)) // SET_NAMED used 10 times in dplyr, yaml, data.table, iotools, RSQLite
#define SET_NAMES(x, n) Rf_setAttrib(x, R_NamesSymbol, n) // SET_NAMES used 346 times in 37 packages
#define SET_OBJECT(x,v) (((x)->sxpinfo.obj)=(v)) // SET_OBJECT used 32 times in RSclient, reshape2, Rserve, data.table, actuar, dplyr, proxy, rmongodb, slam, tau
#define SET_RDEBUG(x,v) (((x)->sxpinfo.debug)=(v)) // SET_RDEBUG unused
#define SET_REFCNT(x,v) do {} while(0) // SET_REFCNT unused
#define SET_RSTEP(x,v) (((x)->sxpinfo.spare)=(v)) // SET_RSTEP unused
#define SET_RTRACE(x,v) (((x)->sxpinfo.trace)=(v)) // SET_RTRACE unused
#define SET_S4_OBJECT(x) (((x)->sxpinfo.gp) |= ((unsigned short)(1<<4))) // SET_S4_OBJECT used 12 times in RSclient, redland, Rserve, data.table, FREGAT, rJPSGCS, tau
#define SET_SHORT_VEC_LENGTH SET_SHORT_VEC_LENGTH // SET_SHORT_VEC_LENGTH unused
#define SET_SHORT_VEC_TRUELENGTH SET_SHORT_VEC_TRUELENGTH // SET_SHORT_VEC_TRUELENGTH unused
#define SET_SLOT(x, what, value) R_do_slot_assign(x, what, value) // SET_SLOT used 561 times in 32 packages
#define SET_TRACKREFS(x,v) do {} while(0) // SET_TRACKREFS unused
#define SET_TRUELENGTH(x,v) do { SEXP sl__x__ = (x); R_xlen_t sl__v__ = (v); if (((((VECSEXP) (sl__x__))->vecsxp.length) == -1)) (((R_long_vec_hdr_t *) (sl__x__))[-1].lv_truelength = (sl__v__)); else ((((VECSEXP) (sl__x__))->vecsxp.truelength) = ((R_len_t) sl__v__)); } while (0) // SET_TRUELENGTH used 26 times in data.table
#define SET_TYPEOF(x,v) (((x)->sxpinfo.type)=(v)) // SET_TYPEOF used 38 times in 21 packages
#define SEXPREC_HEADER <defined> // SEXPREC_HEADER unused
#define SHORT_VEC_LENGTH(x) (((VECSEXP) (x))->vecsxp.length) // SHORT_VEC_LENGTH used 1 time in Rcpp11
#define SHORT_VEC_TRUELENGTH(x) (((VECSEXP) (x))->vecsxp.truelength) // SHORT_VEC_TRUELENGTH unused
#define SINGLESXP 302 // SINGLESXP used 1 time in rgl
#define SINGLE_BASE 2 // SINGLE_BASE unused
#define SINGLE_EPS 1.19209290e-7F // SINGLE_EPS unused
#define SINGLE_XMAX 3.40282347e+38F // SINGLE_XMAX used 4 times in mapproj
#define SINGLE_XMIN 1.17549435e-38F // SINGLE_XMIN unused
#define SINT_MAX 2147483647 // SINT_MAX used 4 times in robust, AnalyzeFMRI
#define SINT_MIN (-2147483647 -1) // SINT_MIN used 2 times in robust
#define SIZEOF_SIZE_T 8 // SIZEOF_SIZE_T used 1 time in PythonInR
#define SPECIALSXP 7 // SPECIALSXP used 22 times in RPostgreSQL, PythonInR, Rcpp11, purrr, seqminer, Rcpp, yaml, pryr, rtkpp, rtkore
#define STRING_ELT(x,i) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))[i] // STRING_ELT used 4143 times in 333 packages
#define STRING_PTR(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // STRING_PTR used 65 times in 14 packages
#define STRING_VALUE(x) ((const char *) (((SEXPREC_ALIGN *) (Rf_asChar(x))) + 1)) // STRING_VALUE used 13 times in rggobi, XML, rgenoud, ParamHelpers, digest, lazy, RGtk2, SoDA, spatstat
#define STRSXP 16 // STRSXP used 3247 times in 327 packages
#define SUPPORT_MBCS 1 // SUPPORT_MBCS used 1 time in bibtex
#define SUPPORT_UTF8 1 // SUPPORT_UTF8 used 3 times in tau, rindex, stringi
#define SYMSXP 1 // SYMSXP used 94 times in 25 packages
#define SYMVALUE(x) ((x)->u.symsxp.value) // SYMVALUE unused
#define S_EVALUATOR // S_EVALUATOR used 66 times in 13 packages
#define Salloc(n,t) (t*)S_alloc(n, sizeof(t)) // Salloc used 299 times in logspline, multic, polspline, splusTimeDate, geoRglm, haplo.stats, tree, ibdreg, IDPmisc, robust
#define ScalarComplex Rf_ScalarComplex // ScalarComplex unused
#define ScalarInteger Rf_ScalarInteger // ScalarInteger used 704 times in 88 packages
#define ScalarLogical Rf_ScalarLogical // ScalarLogical used 450 times in 64 packages
#define ScalarRaw Rf_ScalarRaw // ScalarRaw used 4 times in qtbase, RGtk2
#define ScalarReal Rf_ScalarReal // ScalarReal used 330 times in 65 packages
#define ScalarString Rf_ScalarString // ScalarString used 198 times in 37 packages
#define Srealloc(p,n,old,t) (t*)S_realloc(p,n,old,sizeof(t)) // Srealloc unused
#define StdinActivity 2 // StdinActivity unused
#define StringBlank Rf_StringBlank // StringBlank unused
#define StringFalse Rf_StringFalse // StringFalse used 3 times in iotools
#define StringTrue Rf_StringTrue // StringTrue used 3 times in iotools
#define TAG(e) ((e)->u.listsxp.tagval) // TAG used 513 times in 40 packages
#define TRACKREFS(x) 0 // TRACKREFS unused
#define TRUE 1 // TRUE used 17978 times in 575 packages
#define TRUELENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? R_BadLongVector(x, "/var/folders/t8/1ry582nx6438y8pn6gk20f3c0000gn/T/preprocessor_test2759381993482855372.cpp", 1384) : (((VECSEXP) (x))->vecsxp.truelength)) // TRUELENGTH used 37 times in data.table
#define TYPEOF(x) ((x)->sxpinfo.type) // TYPEOF used 2832 times in 195 packages
#define TYPE_BITS 5 // TYPE_BITS used 2 times in dplyr
#define UNPROTECT(n) Rf_unprotect(n) // UNPROTECT used 12247 times in 758 packages
#define UNPROTECT_PTR(s) Rf_unprotect_ptr(s) // UNPROTECT_PTR used 307 times in 14 packages
#define UNSET_DDVAL_BIT(x) (((x)->sxpinfo.gp) &= ~1) // UNSET_DDVAL_BIT unused
#define UNSET_S4_OBJECT(x) (((x)->sxpinfo.gp) &= ~((unsigned short)(1<<4))) // UNSET_S4_OBJECT used 2 times in data.table, slam
#define USING_R // USING_R used 238 times in 29 packages
#define VECSXP 19 // VECSXP used 3142 times in 385 packages
#define VECTOR_DATA(x) (((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))) // VECTOR_DATA unused
#define VECTOR_ELT(x,i) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1))[i] // VECTOR_ELT used 8626 times in 291 packages
#define VECTOR_PTR(x) ((SEXP *) (((SEXPREC_ALIGN *) (x)) + 1)) // VECTOR_PTR used 17 times in bit, AdaptFitOS, RJSONIO, Rcpp11, bit64, Rcpp, locfit, iotools
#define VectorIndex Rf_VectorIndex // VectorIndex used 6 times in gnmf
#define VectorToPairList Rf_VectorToPairList // VectorToPairList used 13 times in pomp, arules
#define WARN <defined> // WARN used 122 times in 20 packages
#define WARNING <defined> // WARNING used 957 times in 190 packages
#define WEAKREFSXP 23 // WEAKREFSXP used 19 times in seqminer, Rcpp, pryr, rtkpp, rtkore, Rcpp11
#define XActivity 1 // XActivity used 1 times in rgl
#define XLENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? ((R_long_vec_hdr_t *) (x))[-1].lv_length : (((VECSEXP) (x))->vecsxp.length)) // XLENGTH used 287 times in 21 packages
#define XTRUELENGTH(x) (((((VECSEXP) (x))->vecsxp.length) == -1) ? ((R_long_vec_hdr_t *) (x))[-1].lv_truelength : (((VECSEXP) (x))->vecsxp.truelength)) // XTRUELENGTH unused
#define __STDC_WANT_IEC_60559_FUNCS_EXT__ 1 // __STDC_WANT_IEC_60559_FUNCS_EXT__ unused
#define acopy_string Rf_acopy_string // acopy_string used 10 times in splusTimeDate
#define addMissingVarsToNewEnv Rf_addMissingVarsToNewEnv // addMissingVarsToNewEnv unused
#define alloc3DArray Rf_alloc3DArray // alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
#define allocArray Rf_allocArray // allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
#define allocFormalsList2 Rf_allocFormalsList2 // allocFormalsList2 unused
#define allocFormalsList3 Rf_allocFormalsList3 // allocFormalsList3 unused
#define allocFormalsList4 Rf_allocFormalsList4 // allocFormalsList4 unused
#define allocFormalsList5 Rf_allocFormalsList5 // allocFormalsList5 unused
#define allocFormalsList6 Rf_allocFormalsList6 // allocFormalsList6 unused
#define allocList Rf_allocList // allocList used 60 times in 25 packages
#define allocMatrix Rf_allocMatrix // allocMatrix used 1577 times in 244 packages
#define allocS4Object Rf_allocS4Object // allocS4Object used 1 times in arules
#define allocSExp Rf_allocSExp // allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
#define allocVector Rf_allocVector // allocVector used 12419 times in 551 packages
#define allocVector3 Rf_allocVector3 // allocVector3 unused
#define any_duplicated Rf_any_duplicated // any_duplicated used 5 times in data.table, checkmate
#define any_duplicated3 Rf_any_duplicated3 // any_duplicated3 unused
#define applyClosure Rf_applyClosure // applyClosure unused
#define arraySubscript Rf_arraySubscript // arraySubscript used 13 times in proxy, arules, arulesSequences, cba, seriation
#define asChar Rf_asChar // asChar used 194 times in 36 packages
#define asCharacterFactor Rf_asCharacterFactor // asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
#define asComplex Rf_asComplex // asComplex used 1 times in ff
#define asInteger Rf_asInteger // asInteger used 1277 times in 140 packages
#define asLogical Rf_asLogical // asLogical used 462 times in 64 packages
#define asReal Rf_asReal // asReal used 383 times in 83 packages
#define asS4 Rf_asS4 // asS4 unused
#define attribute_hidden // attribute_hidden used 170 times in 15 packages
#define attribute_visible // attribute_visible used 14 times in lfe, rgl, quadprog, data.table, chebpol, rstan, rmongodb, TPmsm, MonoPoly, bibtex
#define bessel_i Rf_bessel_i // bessel_i used 29 times in BiTrinA, Binarize, overlap, RCALI, Hankel, Rcpp11, rotations, Rcpp, moveHMM, dti
#define bessel_i_ex Rf_bessel_i_ex // bessel_i_ex used 5 times in Rcpp, Rcpp11, dti
#define bessel_j Rf_bessel_j // bessel_j used 25 times in SpatialExtremes, constrainedKriging, BH, Rcpp, RandomFields, Rcpp11
#define bessel_j_ex Rf_bessel_j_ex // bessel_j_ex used 4 times in Rcpp, Rcpp11
#define bessel_k Rf_bessel_k // bessel_k used 127 times in 26 packages
#define bessel_k_ex Rf_bessel_k_ex // bessel_k_ex used 9 times in geostatsp, Rcpp, tgp, Rcpp11
#define bessel_y Rf_bessel_y // bessel_y used 4 times in Rcpp, Rcpp11
#define bessel_y_ex Rf_bessel_y_ex // bessel_y_ex used 4 times in Rcpp, Rcpp11
#define beta Rf_beta // beta used 32773 times in 615 packages
#define cPsort Rf_cPsort // cPsort unused
#define call_S call_R // call_S used 2 times in locfit
#define choose Rf_choose // choose used 1368 times in 287 packages
#define classgets Rf_classgets // classgets used 91 times in 30 packages
#define coerceVector Rf_coerceVector // coerceVector used 2585 times in 167 packages
#define col2name Rf_col2name // col2name used 2 times in tikzDevice
#define conformable Rf_conformable // conformable used 141 times in 22 packages
#define cons Rf_cons // cons used 609 times in 39 packages
#define copyListMatrix Rf_copyListMatrix // copyListMatrix used 1 times in Matrix
#define copyMatrix Rf_copyMatrix // copyMatrix used 7 times in BDgraph, Matrix, kza
#define copyMostAttrib Rf_copyMostAttrib // copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
#define copyVector Rf_copyVector // copyVector used 12 times in tm, kza, mlegp, adaptivetau
#define countContexts Rf_countContexts // countContexts unused
#define curDevice Rf_curDevice // curDevice used 4 times in qtutils, showtext, tkrplot
#define dbeta Rf_dbeta // dbeta used 377 times in 54 packages
#define dbinom Rf_dbinom // dbinom used 290 times in 40 packages
#define dbinom_raw Rf_dbinom_raw // dbinom_raw used 50 times in igraph, MCMCpack, secr, AdaptFitOS, phcfM, gof, MasterBayes, locfit
#define dcauchy Rf_dcauchy // dcauchy used 25 times in DPpackage, multimark, vcrpart, kernlab, Rcpp11, RInside, Rcpp, aucm, ordinal, littler
#define dchisq Rf_dchisq // dchisq used 57 times in 14 packages
#define defineVar Rf_defineVar // defineVar used 218 times in 38 packages
#define desc2GEDesc Rf_desc2GEDesc // desc2GEDesc used 5 times in Cairo, JavaGD, cairoDevice
#define dexp Rf_dexp // dexp used 646 times in 82 packages
#define df Rf_df // df unused
#define dgamma Rf_dgamma // dgamma used 617 times in 57 packages
#define dgeom Rf_dgeom // dgeom used 16 times in RInside, Rcpp, ergm.count, Rcpp11, littler
#define dhyper Rf_dhyper // dhyper used 14 times in AdaptFitOS, Rcpp11, RInside, Rcpp, CorrBin, locfit, littler
#define digamma Rf_digamma // digamma used 20689 times in 54 packages
#define dimgets Rf_dimgets // dimgets used 3 times in CorrBin
#define dimnamesgets Rf_dimnamesgets // dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
#define dlnorm Rf_dlnorm // dlnorm used 68 times in 22 packages
#define dlogis Rf_dlogis // dlogis used 91 times in 18 packages
#define dnbeta Rf_dnbeta // dnbeta used 6 times in Rcpp, Rcpp11
#define dnbinom Rf_dnbinom // dnbinom used 170 times in 27 packages
#define dnbinom_mu Rf_dnbinom_mu // dnbinom_mu used 18 times in RDS, KFAS, Rcpp11, unmarked, Rcpp, sspse, Bclim
#define dnchisq Rf_dnchisq // dnchisq used 7 times in spc, Rcpp, Rcpp11
#define dnf Rf_dnf // dnf used 13 times in RxODE, Rcpp, Rcpp11
#define dnorm Rf_dnorm4 // dnorm used 1377 times in 151 packages
#define dnorm4 Rf_dnorm4 // dnorm4 used 27 times in 11 packages
#define dnt Rf_dnt // dnt used 17 times in alineR, DNAtools, gmum.r, Rcpp11, Rcpp, bayesLife, spc
#define doKeybd Rf_doKeybd // doKeybd used 2 times in cairoDevice
#define doMouseEvent Rf_doMouseEvent // doMouseEvent used 6 times in cairoDevice
#define dpois Rf_dpois // dpois used 212 times in 37 packages
#define dpois_raw Rf_dpois_raw // dpois_raw used 25 times in igraph, MCMCpack, AdaptFitOS, phcfM, gof, MasterBayes, locfit
#define dpsifn Rf_dpsifn // dpsifn used 4 times in Rcpp, Rcpp11
#define dsignrank Rf_dsignrank // dsignrank used 7 times in RInside, Rcpp, fuzzyRankTests, Rcpp11, littler
#define dt Rf_dt // dt unused
#define dtukey Rf_dtukey // dtukey used 5 times in timereg, Rcpp, Rcpp11
#define dunif Rf_dunif // dunif used 120 times in 18 packages
#define duplicate Rf_duplicate // duplicate used 2088 times in 224 packages
#define duplicated Rf_duplicated // duplicated used 402 times in 100 packages
#define dweibull Rf_dweibull // dweibull used 38 times in 16 packages
#define dwilcox Rf_dwilcox // dwilcox used 12 times in clinfun, fuzzyRankTests, Rcpp11, RInside, Rcpp, DescTools, littler
#define elt Rf_elt // elt used 2310 times in 37 packages
#define error Rf_error // error used 63771 times in 1109 packages
#define error_return(msg) { Rf_error(msg); return R_NilValue; } // error_return used 100 times in rpg, RPostgreSQL, Rook, git2r, grr, rJava, rmumps
#define errorcall Rf_errorcall // errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
#define errorcall_return(cl,msg) { Rf_errorcall(cl, msg); return R_NilValue; } // errorcall_return used 31 times in Runuran
#define eval Rf_eval // eval used 25178 times in 269 packages
#define findFun Rf_findFun // findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
#define findVar Rf_findVar // findVar used 1333 times in 24 packages
#define findVarInFrame Rf_findVarInFrame // findVarInFrame used 101 times in 13 packages
#define findVarInFrame3 Rf_findVarInFrame3 // findVarInFrame3 used 5 times in datamap
#define fmax2 Rf_fmax2 // fmax2 used 345 times in 60 packages
#define fmin2 Rf_fmin2 // fmin2 used 224 times in 46 packages
#define formatComplex Rf_formatComplex // formatComplex unused
#define formatInteger Rf_formatInteger // formatInteger used 2 times in qtbase, RGtk2
#define formatLogical Rf_formatLogical // formatLogical used 2 times in qtbase, RGtk2
#define formatReal Rf_formatReal // formatReal used 5 times in data.table, qtbase, RGtk2
#define fprec Rf_fprec // fprec used 38 times in wfe, Rcpp, msm, list, Rcpp11
#define fromDeviceHeight GEfromDeviceHeight // fromDeviceHeight unused
#define fromDeviceWidth GEfromDeviceWidth // fromDeviceWidth unused
#define fromDeviceX GEfromDeviceX // fromDeviceX used 1 times in RSVGTipsDevice
#define fromDeviceY GEfromDeviceY // fromDeviceY unused
#define fround Rf_fround // fround used 13 times in bioPN, exactLoglinTest, frontiles, Rcpp11, FRESA.CAD, Rcpp, rmetasim, treethresh
#define fsign Rf_fsign // fsign used 66 times in 15 packages
#define ftrunc Rf_ftrunc // ftrunc used 123 times in 22 packages
#define gammafn Rf_gammafn // gammafn used 374 times in 46 packages
#define getAttrib Rf_getAttrib // getAttrib used 1930 times in 239 packages
#define getCharCE Rf_getCharCE // getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
#define gsetVar Rf_gsetVar // gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
#define iPsort Rf_iPsort // iPsort used 3 times in matrixStats, robustbase
#define imax2 Rf_imax2 // imax2 used 150 times in 37 packages
#define imin2 Rf_imin2 // imin2 used 193 times in 28 packages
#define inherits Rf_inherits // inherits used 814 times in 80 packages
#define install Rf_install // install used 3178 times in 224 packages
#define installChar Rf_installChar // installChar used 4 times in dplyr
#define installDDVAL Rf_installDDVAL // installDDVAL unused
#define installS3Signature Rf_installS3Signature // installS3Signature unused
#define isArray Rf_isArray // isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
#define isBasicClass Rf_isBasicClass // isBasicClass unused
#define isBlankString Rf_isBlankString // isBlankString used 1 times in iotools
#define isByteCode(x) (((x)->sxpinfo.type)==21) // isByteCode unused
#define isComplex(s) (((s)->sxpinfo.type) == 15) // isComplex used 119 times in checkmate, PythonInR, ifultools, Rblpapi, Rcpp11, rmatio, stringi, Matrix, qtbase
#define isEnvironment(s) (((s)->sxpinfo.type) == 4) // isEnvironment used 113 times in 52 packages
#define isExpression(s) (((s)->sxpinfo.type) == 20) // isExpression used 3 times in PythonInR, Rcpp11
#define isFactor Rf_isFactor // isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
#define isFrame Rf_isFrame // isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
#define isFree Rf_isFree // isFree unused
#define isFunction Rf_isFunction // isFunction used 274 times in 43 packages
#define isInteger Rf_isInteger // isInteger used 402 times in 77 packages
#define isLanguage Rf_isLanguage // isLanguage used 63 times in PythonInR, rgp, RandomFields
#define isList Rf_isList // isList used 40 times in 11 packages
#define isLogical(s) (((s)->sxpinfo.type) == 10) // isLogical used 215 times in 53 packages
#define isMatrix Rf_isMatrix // isMatrix used 293 times in 65 packages
#define isNewList Rf_isNewList // isNewList used 103 times in 27 packages
#define isNull(s) (((s)->sxpinfo.type) == 0) // isNull used 1915 times in 119 packages
#define isNumber Rf_isNumber // isNumber used 14 times in PythonInR, readr, stringi, qtbase
#define isNumeric Rf_isNumeric // isNumeric used 468 times in 49 packages
#define isObject(s) (((s)->sxpinfo.obj) != 0) // isObject used 11 times in dplyr, Rcpp, PythonInR, Rcpp11, stringi, rmumps
#define isOrdered Rf_isOrdered // isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
#define isPairList Rf_isPairList // isPairList used 2 times in PythonInR
#define isPrimitive Rf_isPrimitive // isPrimitive used 7 times in PythonInR, qtbase
#define isReal(s) (((s)->sxpinfo.type) == 14) // isReal used 323 times in 64 packages
#define isS4 Rf_isS4 // isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
#define isString(s) (((s)->sxpinfo.type) == 16) // isString used 280 times in 59 packages
#define isSymbol(s) (((s)->sxpinfo.type) == 1) // isSymbol used 68 times in PythonInR, data.table, Rcpp11, stringi, rgp, dbarts, rJava, sourcetools
#define isTs Rf_isTs // isTs used 2 times in PythonInR
#define isUnordered Rf_isUnordered // isUnordered used 2 times in PythonInR
#define isUnsorted Rf_isUnsorted // isUnsorted unused
#define isUserBinop Rf_isUserBinop // isUserBinop used 2 times in PythonInR
#define isValidString Rf_isValidString // isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
#define isValidStringF Rf_isValidStringF // isValidStringF used 2 times in PythonInR
#define isVector Rf_isVector // isVector used 182 times in 46 packages
#define isVectorAtomic Rf_isVectorAtomic // isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
#define isVectorList Rf_isVectorList // isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
#define isVectorizable Rf_isVectorizable // isVectorizable used 3 times in PythonInR, robfilter
#define jump_to_toplevel Rf_jump_to_toplevel // jump_to_toplevel used 1 times in rJava
#define killDevice Rf_killDevice // killDevice used 3 times in tkrplot
#define lang1 Rf_lang1 // lang1 used 30 times in 11 packages
#define lang2 Rf_lang2 // lang2 used 216 times in 75 packages
#define lang3 Rf_lang3 // lang3 used 107 times in 28 packages
#define lang4 Rf_lang4 // lang4 used 65 times in 21 packages
#define lang5 Rf_lang5 // lang5 used 11 times in PBSddesolve, GNE, SMC
#define lang6 Rf_lang6 // lang6 used 2 times in GNE
#define lastElt Rf_lastElt // lastElt unused
#define lazy_duplicate Rf_lazy_duplicate // lazy_duplicate unused
#define lbeta Rf_lbeta // lbeta used 213 times in 23 packages
#define lchoose Rf_lchoose // lchoose used 54 times in 17 packages
#define lcons Rf_lcons // lcons used 16 times in rmgarch
#define leftButton 1 // leftButton unused
#define length(x) Rf_length(x) // length used 44060 times in 1224 packages
#define lengthgets Rf_lengthgets // lengthgets used 47 times in 11 packages
#define lgamma1p Rf_lgamma1p // lgamma1p used 14 times in Rcpp, OpenMx, ergm.count, heavy, mixAK, Rcpp11
#define lgammafn Rf_lgammafn // lgammafn used 407 times in 66 packages
#define lgammafn_sign Rf_lgammafn_sign // lgammafn_sign used 4 times in Rcpp, Rcpp11
#define list1 Rf_list1 // list1 used 197 times in 11 packages
#define list2 Rf_list2 // list2 used 441 times in 12 packages
#define list3 Rf_list3 // list3 used 72 times in marked, Rdsdp, BH, svd
#define list4 Rf_list4 // list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
#define list5 Rf_list5 // list5 used 63 times in Rdsdp, BH
#define listAppend Rf_listAppend // listAppend used 1 times in ore
#define log1pmx Rf_log1pmx // log1pmx used 20 times in DPpackage, BH, Rcpp, Rcpp11
#define logspace_add Rf_logspace_add // logspace_add used 21 times in sna, BMN, Rcpp11, RxCEcolInf, SamplerCompare, STAR, Rcpp
#define logspace_sub Rf_logspace_sub // logspace_sub used 16 times in sna, Rcpp11, SamplerCompare, truncnorm, STAR, Rcpp, bfp
#define mainloop Rf_mainloop // mainloop unused
#define match Rf_match // match used 8773 times in 388 packages
#define matchE Rf_matchE // matchE unused
#define middleButton 2 // middleButton unused
#define mkChar Rf_mkChar // mkChar used 4545 times in 287 packages
#define mkCharCE Rf_mkCharCE // mkCharCE used 72 times in 15 packages
#define mkCharLen Rf_mkCharLen // mkCharLen used 38 times in 16 packages
#define mkCharLenCE Rf_mkCharLenCE // mkCharLenCE used 23 times in 11 packages
#define mkNamed Rf_mkNamed // mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
#define mkString Rf_mkString // mkString used 814 times in 96 packages
#define namesgets Rf_namesgets // namesgets used 80 times in 14 packages
#define ncols Rf_ncols // ncols used 3805 times in 182 packages
#define ndevNumber Rf_ndevNumber // ndevNumber used 11 times in Cairo, JavaGD, cairoDevice
#define nextDevice Rf_nextDevice // nextDevice used 3 times in rgl
#define nlevels Rf_nlevels // nlevels used 546 times in 26 packages
#define nrows Rf_nrows // nrows used 4332 times in 215 packages
#define nthcdr Rf_nthcdr // nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
#define onintr Rf_onintr // onintr used 1 times in rJava
#define pbeta Rf_pbeta // pbeta used 262 times in 39 packages
#define pbeta_raw Rf_pbeta_raw // pbeta_raw used 10 times in MCMCpack, MasterBayes, Rcpp, phcfM, gof, Rcpp11
#define pbinom Rf_pbinom // pbinom used 53 times in 16 packages
#define pcauchy Rf_pcauchy // pcauchy used 25 times in DPpackage, vcrpart, Rcpp11, RInside, Rcpp, ordinal, RandomFields, littler
#define pchisq Rf_pchisq // pchisq used 152 times in 33 packages
#define pentagamma Rf_pentagamma // pentagamma used 8 times in Rcpp, Rcpp11
#define pexp Rf_pexp // pexp used 117 times in 26 packages
#define pf Rf_pf // pf unused
#define pgamma Rf_pgamma // pgamma used 164 times in 40 packages
#define pgeom Rf_pgeom // pgeom used 10 times in RInside, Rcpp, Rcpp11, littler
#define phyper Rf_phyper // phyper used 17 times in Runuran, Rcpp11, cpm, RInside, Rcpp, RandomFields, vegan, littler
#define plnorm Rf_plnorm // plnorm used 37 times in 14 packages
#define plogis Rf_plogis // plogis used 125 times in 21 packages
#define pmatch Rf_pmatch // pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
#define pnbeta Rf_pnbeta // pnbeta used 23 times in bayesSurv, Rcpp, Rcpp11
#define pnbinom Rf_pnbinom // pnbinom used 29 times in 13 packages
#define pnbinom_mu Rf_pnbinom_mu // pnbinom_mu used 3 times in Rcpp, Rcpp11
#define pnchisq Rf_pnchisq // pnchisq used 13 times in spc, Rcpp, Rcpp11
#define pnf Rf_pnf // pnf used 12 times in Rcpp, Rcpp11
#define pnorm Rf_pnorm5 // pnorm used 1582 times in 159 packages
#define pnorm5 Rf_pnorm5 // pnorm5 used 77 times in 12 packages
#define pnorm_both Rf_pnorm_both // pnorm_both used 12 times in MCMCpack, MasterBayes, Rcpp, phcfM, gof, Rcpp11
#define pnt Rf_pnt // pnt used 111 times in BayesXsrc, hypervolume, Rcpp, spc, Rcpp11
#define ppois Rf_ppois // ppois used 62 times in 18 packages
#define prevDevice Rf_prevDevice // prevDevice unused
#define printComplexVector Rf_printComplexVector // printComplexVector unused
#define printIntegerVector Rf_printIntegerVector // printIntegerVector used 2 times in bvpSolve, deTestSet
#define printRealVector Rf_printRealVector // printRealVector used 2 times in bvpSolve, deTestSet
#define protect Rf_protect // protect used 599 times in 101 packages
#define psigamma Rf_psigamma // psigamma used 9 times in Rcpp, Rcpp11
#define psignrank Rf_psignrank // psignrank used 11 times in FRESA.CAD, RInside, Rcpp, fuzzyRankTests, Rcpp11, littler
#define psmatch Rf_psmatch // psmatch used 5 times in rgl
#define pt Rf_pt // pt unused
#define ptukey Rf_ptukey // ptukey used 6 times in RInside, Rcpp, Rcpp11, littler
#define punif Rf_punif // punif used 70 times in 11 packages
#define pweibull Rf_pweibull // pweibull used 42 times in 14 packages
#define pwilcox Rf_pwilcox // pwilcox used 16 times in fuzzyRankTests, Rcpp11, FRESA.CAD, RInside, simctest, Rcpp, littler
#define pythag Rf_pythag // pythag used 105 times in 21 packages
#define qbeta Rf_qbeta // qbeta used 57 times in 17 packages
#define qbinom Rf_qbinom // qbinom used 18 times in DPpackage, Runuran, BayesXsrc, mvabund, Rcpp11, RInside, Rcpp, ump, littler
#define qcauchy Rf_qcauchy // qcauchy used 11 times in RInside, DPpackage, Rcpp, Rcpp11, littler
#define qchisq Rf_qchisq // qchisq used 38 times in 21 packages
#define qchisq_appr Rf_qchisq_appr // qchisq_appr used 2 times in Rcpp, Rcpp11
#define qexp Rf_qexp // qexp used 20 times in monomvn, GeoGenetix, Rcpp11, icenReg, RInside, TMB, Rcpp, Sunder, RandomFields, littler
#define qf Rf_qf // qf unused
#define qgamma Rf_qgamma // qgamma used 58 times in 25 packages
#define qgeom Rf_qgeom // qgeom used 10 times in RInside, Rcpp, Rcpp11, littler
#define qhyper Rf_qhyper // qhyper used 11 times in RInside, Runuran, Rcpp, Rcpp11, littler
#define qlnorm Rf_qlnorm // qlnorm used 11 times in icenReg, RInside, Rcpp, Rcpp11, littler
#define qlogis Rf_qlogis // qlogis used 16 times in DPpackage, geoBayes, Rcpp11, RInside, TMB, qrjoint, Rcpp, littler
#define qnbeta Rf_qnbeta // qnbeta used 8 times in Rcpp, Rcpp11
#define qnbinom Rf_qnbinom // qnbinom used 12 times in RInside, Runuran, Rcpp, mvabund, Rcpp11, littler
#define qnbinom_mu Rf_qnbinom_mu // qnbinom_mu used 3 times in Rcpp, Rcpp11
#define qnchisq Rf_qnchisq // qnchisq used 9 times in spc, Rcpp, Rcpp11
#define qnf Rf_qnf // qnf used 8 times in Rcpp, Rcpp11
#define qnorm Rf_qnorm5 // qnorm used 444 times in 96 packages
#define qnorm5 Rf_qnorm5 // qnorm5 used 30 times in igraph, PwrGSD, geepack, robustvarComp, Rcpp11, tpr, Rcpp
#define qnt Rf_qnt // qnt used 12 times in ore, Rcpp, spc, Rcpp11
#define qpois Rf_qpois // qpois used 23 times in 11 packages
#define qsignrank Rf_qsignrank // qsignrank used 6 times in RInside, Rcpp, Rcpp11, littler
#define qt Rf_qt // qt unused
#define qtukey Rf_qtukey // qtukey used 6 times in RInside, Rcpp, Rcpp11, littler
#define qunif Rf_qunif // qunif used 14 times in RInside, qrjoint, Rcpp, Rcpp11, littler
#define qweibull Rf_qweibull // qweibull used 16 times in BSquare, Rcpp11, icenReg, RInside, TMB, extWeibQuant, Rcpp, littler
#define qwilcox Rf_qwilcox // qwilcox used 10 times in RInside, Rcpp, Rcpp11, littler
#define rPsort Rf_rPsort // rPsort used 63 times in 15 packages
#define rbeta Rf_rbeta // rbeta used 431 times in 59 packages
#define rbinom Rf_rbinom // rbinom used 169 times in 50 packages
#define rcauchy Rf_rcauchy // rcauchy used 21 times in PoweR, RInside, Rcpp, DEoptim, Rcpp11, littler
#define rchisq Rf_rchisq // rchisq used 244 times in 54 packages
#define reEnc Rf_reEnc // reEnc used 3 times in PythonInR, RJSONIO
#define readS3VarsFromFrame Rf_readS3VarsFromFrame // readS3VarsFromFrame unused
#define revsort Rf_revsort // revsort used 60 times in 20 packages
#define rexp Rf_rexp // rexp used 224 times in 56 packages
#define rf Rf_rf // rf unused
#define rgamma Rf_rgamma // rgamma used 786 times in 104 packages
#define rgeom Rf_rgeom // rgeom used 25 times in BSquare, sna, ergm.count, Rcpp11, RInside, Rcpp, littler
#define rhyper Rf_rhyper // rhyper used 13 times in kSamples, RInside, Rcpp, Rcpp11, littler
#define rightButton 4 // rightButton unused
#define rlnorm Rf_rlnorm // rlnorm used 64 times in 18 packages
#define rlogis Rf_rlogis // rlogis used 32 times in MCMCpack, phcfM, gof, Rcpp11, MasterBayes, PoweR, RInside, Rcpp, littler
#define rmultinom Rf_rmultinom // rmultinom used 42 times in 18 packages
#define rnbeta Rf_rnbeta // rnbeta used 4 times in Rcpp, Rcpp11
#define rnbinom Rf_rnbinom // rnbinom used 41 times in 18 packages
#define rnbinom_mu Rf_rnbinom_mu // rnbinom_mu used 7 times in Rcpp, Rcpp11
#define rnchisq Rf_rnchisq // rnchisq used 11 times in Rcpp, Rcpp11
#define rnf Rf_rnf // rnf used 35 times in sem, Rcpp, Rcpp11
#define rnorm Rf_rnorm // rnorm used 1865 times in 198 packages
#define rnt Rf_rnt // rnt used 2 times in Rcpp, Rcpp11
#define rownamesgets Rf_rownamesgets // rownamesgets unused
#define rpois Rf_rpois // rpois used 157 times in 51 packages
#define rsignrank Rf_rsignrank // rsignrank used 11 times in RInside, Rcpp, Rcpp11, littler
#define rt Rf_rt // rt unused
#define rtukey Rf_rtukey // rtukey used 2 times in Rcpp, Rcpp11
#define runif Rf_runif // runif used 2810 times in 273 packages
#define rweibull Rf_rweibull // rweibull used 35 times in 12 packages
#define rwilcox Rf_rwilcox // rwilcox used 11 times in RInside, Rcpp, Rcpp11, littler
#define s_object SEXPREC // s_object used 18563 times in 11 packages
#define selectDevice Rf_selectDevice // selectDevice unused
#define setAttrib Rf_setAttrib // setAttrib used 1830 times in 251 packages
#define setIVector Rf_setIVector // setIVector unused
#define setRVector Rf_setRVector // setRVector used 3 times in RcppClassic, RcppClassicExamples
#define setSVector Rf_setSVector // setSVector unused
#define setVar Rf_setVar // setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
#define shallow_duplicate Rf_shallow_duplicate // shallow_duplicate used 2 times in tmlenet, smint
#define sign Rf_sign // sign used 5291 times in 389 packages
#define str2type Rf_str2type // str2type used 1 times in RGtk2
#define stringPositionTr Rf_stringPositionTr // stringPositionTr unused
#define stringSuffix Rf_stringSuffix // stringSuffix unused
#define substitute Rf_substitute // substitute used 255 times in 56 packages
#define tetragamma Rf_tetragamma // tetragamma used 22 times in Rcpp, Rcpp11, RcppShark
#define toDeviceHeight GEtoDeviceHeight // toDeviceHeight unused
#define toDeviceWidth GEtoDeviceWidth // toDeviceWidth unused
#define toDeviceX GEtoDeviceX // toDeviceX used 1 times in RSVGTipsDevice
#define toDeviceY GEtoDeviceY // toDeviceY unused
#define topenv Rf_topenv // topenv unused
#define translateChar Rf_translateChar // translateChar used 59 times in 19 packages
#define translateChar0 Rf_translateChar0 // translateChar0 unused
#define translateCharUTF8 Rf_translateCharUTF8 // translateCharUTF8 used 66 times in 13 packages
#define trigamma Rf_trigamma // trigamma used 128 times in 24 packages
#define type2char Rf_type2char // type2char used 107 times in 12 packages
#define type2rstr Rf_type2rstr // type2rstr unused
#define type2str Rf_type2str // type2str used 3 times in Kmisc, yaml
#define type2str_nowarn Rf_type2str_nowarn // type2str_nowarn used 1 times in qrmtools
#define unprotect Rf_unprotect // unprotect used 110 times in 35 packages
#define unprotect_ptr Rf_unprotect_ptr // unprotect_ptr unused
#define warning Rf_warning // warning used 7679 times in 434 packages
#define warningcall Rf_warningcall // warningcall used 4 times in RInside, jsonlite, pbdMPI
#define warningcall_immediate Rf_warningcall_immediate // warningcall_immediate used 2 times in Runuran
#define xlength(x) Rf_xlength(x) // xlength used 186 times in stringdist, yuima, matrixStats, Rhpc, validate, checkmate, dplR, Rdsdp, pscl, DescTools
#define xlengthgets Rf_xlengthgets // xlengthgets unused
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R.h
typedef double Sfloat; // Sfloat used 440 times in AnalyzeFMRI, wavethresh, IGM.MEA, spatial, LS2W, robust, MASS, PBSmapping
typedef int Sint; // Sint used 2750 times in 48 packages
extern "C" {
void R_FlushConsole(void); // R_FlushConsole used 651 times in 78 packages
void R_ProcessEvents(void); // R_ProcessEvents used 275 times in 39 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Applic.h
extern "C" {
typedef void integr_fn(double *x, int n, void *ex);
void Rdqags(integr_fn f, void *ex, double *a, double *b, // Rdqags used 58 times in 18 packages
double *epsabs, double *epsrel,
double *result, double *abserr, int *neval, int *ier,
int *limit, int *lenw, int *last, int *iwork, double *work);
void Rdqagi(integr_fn f, void *ex, double *bound, int *inf, // Rdqagi used 33 times in 11 packages
double *epsabs, double *epsrel,
double *result, double *abserr, int *neval, int *ier,
int *limit, int *lenw, int *last,
int *iwork, double *work);
typedef double optimfn(int, double *, void *);
typedef void optimgr(int, double *, double *, void *);
void vmmin(int n, double *b, double *Fmin, // vmmin used 62 times in rstpm2, glmmML, RCPmod, SpeciesMix, nnet, fanc, pcaPP, dti, eha, MASS
optimfn fn, optimgr gr, int maxit, int trace,
int *mask, double abstol, double reltol, int nREPORT,
void *ex, int *fncount, int *grcount, int *fail);
void nmmin(int n, double *Bvec, double *X, double *Fmin, optimfn fn, // nmmin used 19 times in AnalyzeFMRI, rstpm2, forecast, phyclust, pcaPP, bda, eha, oce
int *fail, double abstol, double intol, void *ex,
double alpha, double bet, double gamm, int trace,
int *fncount, int maxit);
void cgmin(int n, double *Bvec, double *X, double *Fmin, // cgmin used 1 times in pcaPP
optimfn fn, optimgr gr,
int *fail, double abstol, double intol, void *ex,
int type, int trace, int *fncount, int *grcount, int maxit);
void lbfgsb(int n, int m, double *x, double *l, double *u, int *nbd, // lbfgsb used 34 times in Iboot, PoweR, geostatsp, glmmML, laGP, CorrBin, abn, dti, eha
double *Fmin, optimfn fn, optimgr gr, int *fail, void *ex,
double factr, double pgtol, int *fncount, int *grcount,
int maxit, char *msg, int trace, int nREPORT);
void samin(int n, double *pb, double *yb, optimfn fn, int maxit, // samin used 4 times in icenReg, rEDM, RcppEigen, pcaPP
int tmax, double ti, int trace, void *ex);
int findInterval(double *xt, int n, double x, // findInterval used 11 times in BSquare, DNAprofiles, unfoldr, chebpol, pomp, eco, protViz, PBSmapping, spatstat
Rboolean rightmost_closed, Rboolean all_inside, int ilo,
int *mflag);
void dqrqty_(double *x, int *n, int *k, double *qraux, // dqrqty_ unused
double *y, int *ny, double *qty);
void dqrqy_(double *x, int *n, int *k, double *qraux, // dqrqy_ unused
double *y, int *ny, double *qy);
void dqrcf_(double *x, int *n, int *k, double *qraux, // dqrcf_ used 1 time in TwoPhaseInd
double *y, int *ny, double *b, int *info);
void dqrrsd_(double *x, int *n, int *k, double *qraux, // dqrrsd_ unused
double *y, int *ny, double *rsd);
void dqrxb_(double *x, int *n, int *k, double *qraux, // dqrxb_ unused
double *y, int *ny, double *xb);
double R_pretty(double *lo, double *up, int *ndiv, int min_n, // R_pretty used 1 time in rgl
double shrink_sml, double high_u_fact[],
int eps_correction, int return_bounds);
typedef void (*fcn_p)(int, double *, double *, void *);
typedef void (*d2fcn_p)(int, int, double *, double *, void *);
void fdhess(int n, double *x, double fval, fcn_p fun, void *state, // fdhess used 16 times in sem, fArma, fracdiff
double *h, int nfd, double *step, double *f, int ndigit,
double *typx);
void optif9(int nr, int n, double *x, // optif9 used 17 times in sem, rstpm2, nlme, pcaPP
fcn_p fcn, fcn_p d1fcn, d2fcn_p d2fcn,
void *state, double *typsiz, double fscale, int method,
int iexp, int *msg, int ndigit, int itnlim, int iagflg,
int iahflg, double dlt, double gradtl, double stepmx,
double steptl, double *xpls, double *fpls, double *gpls,
int *itrmcd, double *a, double *wrk, int *itncnt);
void dqrdc2_(double *x, int *ldx, int *n, int *p, // dqrdc2_ used 4 times in earth, TwoPhaseInd
double *tol, int *rank,
double *qraux, int *pivot, double *work);
void dqrls_(double *x, int *n, int *p, double *y, int *ny, // dqrls_ used 8 times in DatABEL, GenABEL, VariABEL
double *tol, double *b, double *rsd,
double *qty, int *k,
int *jpvt, double *qraux, double *work);
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Arith.h
extern "C" {
extern double R_NaN; // R_NaN used 469 times in 68 packages
extern double R_PosInf; // R_PosInf used 562 times in 112 packages
extern double R_NegInf; // R_NegInf used 699 times in 105 packages
extern double R_NaReal; // R_NaReal used 140 times in 34 packages
// NA_REAL used 1667 times in 226 packages
extern int R_NaInt; // R_NaInt used 58 times in 20 packages
// NA_LOGICAL used 355 times in 73 packages
// NA_INTEGER used 1520 times in 183 packages
int R_IsNA(double); // R_IsNA used 161 times in 40 packages
int R_IsNaN(double); // R_IsNaN used 75 times in 28 packages
int R_finite(double); // R_finite used 232 times in 44 packages
int R_isnancpp(double); // R_isnancpp used 8 times in igraph, PwrGSD
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/BLAS.h
extern "C" {
extern double
dasum_(const int *n, const double *dx, const int *incx);
extern void
daxpy_(const int *n, const double *alpha,
const double *dx, const int *incx,
double *dy, const int *incy);
extern void
dcopy_(const int *n, const double *dx, const int *incx,
double *dy, const int *incy);
extern double
ddot_(const int *n, const double *dx, const int *incx,
const double *dy, const int *incy);
extern double
dnrm2_(const int *n, const double *dx, const int *incx);
extern void
drot_(const int *n, double *dx, const int *incx,
double *dy, const int *incy, const double *c, const double *s);
extern void
drotg_(const double *a, const double *b, double *c, double *s);
extern void
drotm_(const int *n, double *dx, const int *incx,
double *dy, const int *incy, const double *dparam);
extern void
drotmg_(const double *dd1, const double *dd2, const double *dx1,
const double *dy1, double *param);
extern void
dscal_(const int *n, const double *alpha, double *dx, const int *incx);
extern void
dswap_(const int *n, double *dx, const int *incx,
double *dy, const int *incy);
extern int
idamax_(const int *n, const double *dx, const int *incx);
extern void
dgbmv_(const char *trans, const int *m, const int *n,
const int *kl,const int *ku,
const double *alpha, const double *a, const int *lda,
const double *x, const int *incx,
const double *Rf_beta, double *y, const int *incy);
extern void
dgemv_(const char *trans, const int *m, const int *n,
const double *alpha, const double *a, const int *lda,
const double *x, const int *incx, const double *Rf_beta,
double *y, const int *incy);
extern void
dsbmv_(const char *uplo, const int *n, const int *k,
const double *alpha, const double *a, const int *lda,
const double *x, const int *incx,
const double *Rf_beta, double *y, const int *incy);
extern void
dspmv_(const char *uplo, const int *n,
const double *alpha, const double *ap,
const double *x, const int *incx,
const double *Rf_beta, double *y, const int *incy);
extern void
dsymv_(const char *uplo, const int *n, const double *alpha,
const double *a, const int *lda,
const double *x, const int *incx,
const double *Rf_beta, double *y, const int *incy);
extern void
dtbmv_(const char *uplo, const char *trans,
const char *diag, const int *n, const int *k,
const double *a, const int *lda,
double *x, const int *incx);
extern void
dtpmv_(const char *uplo, const char *trans, const char *diag,
const int *n, const double *ap,
double *x, const int *incx);
extern void
dtrmv_(const char *uplo, const char *trans, const char *diag,
const int *n, const double *a, const int *lda,
double *x, const int *incx);
extern void
dtbsv_(const char *uplo, const char *trans,
const char *diag, const int *n, const int *k,
const double *a, const int *lda,
double *x, const int *incx);
extern void
dtpsv_(const char *uplo, const char *trans,
const char *diag, const int *n,
const double *ap, double *x, const int *incx);
extern void
dtrsv_(const char *uplo, const char *trans,
const char *diag, const int *n,
const double *a, const int *lda,
double *x, const int *incx);
extern void
dger_(const int *m, const int *n, const double *alpha,
const double *x, const int *incx,
const double *y, const int *incy,
double *a, const int *lda);
extern void
dsyr_(const char *uplo, const int *n, const double *alpha,
const double *x, const int *incx,
double *a, const int *lda);
extern void
dspr_(const char *uplo, const int *n, const double *alpha,
const double *x, const int *incx, double *ap);
extern void
dsyr2_(const char *uplo, const int *n, const double *alpha,
const double *x, const int *incx,
const double *y, const int *incy,
double *a, const int *lda);
extern void
dspr2_(const char *uplo, const int *n, const double *alpha,
const double *x, const int *incx,
const double *y, const int *incy, double *ap);
extern void
dgemm_(const char *transa, const char *transb, const int *m,
const int *n, const int *k, const double *alpha,
const double *a, const int *lda,
const double *b, const int *ldb,
const double *Rf_beta, double *c, const int *ldc);
extern void
dtrsm_(const char *side, const char *uplo,
const char *transa, const char *diag,
const int *m, const int *n, const double *alpha,
const double *a, const int *lda,
double *b, const int *ldb);
extern void
dtrmm_(const char *side, const char *uplo, const char *transa,
const char *diag, const int *m, const int *n,
const double *alpha, const double *a, const int *lda,
double *b, const int *ldb);
extern void
dsymm_(const char *side, const char *uplo, const int *m,
const int *n, const double *alpha,
const double *a, const int *lda,
const double *b, const int *ldb,
const double *Rf_beta, double *c, const int *ldc);
extern void
dsyrk_(const char *uplo, const char *trans,
const int *n, const int *k,
const double *alpha, const double *a, const int *lda,
const double *Rf_beta, double *c, const int *ldc);
extern void
dsyr2k_(const char *uplo, const char *trans,
const int *n, const int *k,
const double *alpha, const double *a, const int *lda,
const double *b, const int *ldb,
const double *Rf_beta, double *c, const int *ldc);
extern double
dcabs1_(double *z);
extern double
dzasum_(int *n, Rcomplex *zx, int *incx);
extern double
dznrm2_(int *n, Rcomplex *x, int *incx);
extern int
izamax_(int *n, Rcomplex *zx, int *incx);
extern void
zaxpy_(int *n, Rcomplex *za, Rcomplex *zx,
int *incx, Rcomplex *zy, int *incy);
extern void
zcopy_(int *n, Rcomplex *zx, int *incx,
Rcomplex *zy, int *incy);
extern Rcomplex
zdotc_(int *n,
Rcomplex *zx, int *incx, Rcomplex *zy, int *incy);
extern Rcomplex
zdotu_(int *n,
Rcomplex *zx, int *incx, Rcomplex *zy, int *incy);
extern void
zdrot_(int *n, Rcomplex *zx, int *incx, Rcomplex *zy,
int *incy, double *c, double *s);
extern void
zdscal_(int *n, double *da, Rcomplex *zx, int *incx);
extern void
zgbmv_(char *trans, int *m, int *n, int *kl,
int *ku, Rcomplex *alpha, Rcomplex *a, int *lda,
Rcomplex *x, int *incx, Rcomplex *Rf_beta, Rcomplex *y,
int *incy);
extern void
zgemm_(const char *transa, const char *transb, const int *m,
const int *n, const int *k, const Rcomplex *alpha,
const Rcomplex *a, const int *lda,
const Rcomplex *b, const int *ldb,
const Rcomplex *Rf_beta, Rcomplex *c, const int *ldc);
extern void
zgemv_(char *trans, int *m, int *n, Rcomplex *alpha,
Rcomplex *a, int *lda, Rcomplex *x, int *incx,
Rcomplex *Rf_beta, Rcomplex *y, int * incy);
extern void
zgerc_(int *m, int *n, Rcomplex *alpha, Rcomplex *x,
int *incx, Rcomplex *y, int *incy, Rcomplex *a, int *lda);
extern void
zgeru_(int *m, int *n, Rcomplex *alpha, Rcomplex *x,
int *incx, Rcomplex *y, int *incy, Rcomplex *a, int *lda);
extern void
zhbmv_(char *uplo, int *n, int *k, Rcomplex *alpha,
Rcomplex *a, int *lda, Rcomplex *x, int *incx,
Rcomplex *Rf_beta, Rcomplex *y, int *incy);
extern void
zhemm_(char *side, char *uplo, int *m, int *n,
Rcomplex *alpha, Rcomplex *a, int *lda, Rcomplex *b,
int *ldb, Rcomplex *Rf_beta, Rcomplex *c, int *ldc);
extern void
zhemv_(char *uplo, int *n, Rcomplex *alpha, Rcomplex *a,
int *lda, Rcomplex *x, int *incx, Rcomplex *Rf_beta,
Rcomplex *y, int *incy);
extern void
zher_(char *uplo, int *n, double *alpha, Rcomplex *x,
int *incx, Rcomplex *a, int *lda);
extern void
zher2_(char *uplo, int *n, Rcomplex *alpha, Rcomplex *x,
int *incx, Rcomplex *y, int *incy, Rcomplex *a, int *lda);
extern void
zher2k_(char *uplo, char *trans, int *n, int *k,
Rcomplex *alpha, Rcomplex *a, int *lda, Rcomplex *b,
int *ldb, double *Rf_beta, Rcomplex *c, int *ldc);
extern void
zherk_(char *uplo, char *trans, int *n, int *k,
double *alpha, Rcomplex *a, int *lda, double *Rf_beta,
Rcomplex *c, int *ldc);
extern void
zhpmv_(char *uplo, int *n, Rcomplex *alpha, Rcomplex *ap,
Rcomplex *x, int *incx, Rcomplex * Rf_beta, Rcomplex *y,
int *incy);
extern void
zhpr_(char *uplo, int *n, double *alpha,
Rcomplex *x, int *incx, Rcomplex *ap);
extern void
zhpr2_(char *uplo, int *n, Rcomplex *alpha, Rcomplex *x,
int *incx, Rcomplex *y, int *incy, Rcomplex *ap);
extern void
zrotg_(Rcomplex *ca, Rcomplex *cb, double *c, Rcomplex *s);
extern void
zscal_(int *n, Rcomplex *za, Rcomplex *zx, int *incx);
extern void
zswap_(int *n, Rcomplex *zx, int *incx, Rcomplex *zy, int *incy);
extern void
zsymm_(char *side, char *uplo, int *m, int *n,
Rcomplex *alpha, Rcomplex *a, int *lda, Rcomplex *b,
int *ldb, Rcomplex *Rf_beta, Rcomplex *c, int *ldc);
extern void
zsyr2k_(char *uplo, char *trans, int *n, int *k,
Rcomplex *alpha, Rcomplex *a, int *lda, Rcomplex *b,
int *ldb, Rcomplex *Rf_beta, Rcomplex *c, int *ldc);
extern void
zsyrk_(char *uplo, char *trans, int *n, int *k,
Rcomplex *alpha, Rcomplex *a, int *lda,
Rcomplex *Rf_beta, Rcomplex *c, int *ldc);
extern void
ztbmv_(char *uplo, char *trans, char *diag, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *x, int *incx);
extern void
ztbsv_(char *uplo, char *trans, char *diag, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *x, int *incx);
extern void
ztpmv_(char *uplo, char *trans, char *diag, int *n,
Rcomplex *ap, Rcomplex *x, int *incx);
extern void
ztpsv_(char *uplo, char *trans, char *diag, int *n,
Rcomplex *ap, Rcomplex *x, int *incx);
extern void
ztrmm_(char *side, char *uplo, char *transa, char *diag,
int *m, int *n, Rcomplex *alpha, Rcomplex *a,
int *lda, Rcomplex *b, int *ldb);
extern void
ztrmv_(char *uplo, char *trans, char *diag, int *n,
Rcomplex *a, int *lda, Rcomplex *x, int *incx);
extern void
ztrsm_(char *side, char *uplo, char *transa, char *diag,
int *m, int *n, Rcomplex *alpha, Rcomplex *a,
int *lda, Rcomplex *b, int *ldb);
extern void
ztrsv_(char *uplo, char *trans, char *diag, int *n,
Rcomplex *a, int *lda, Rcomplex *x, int *incx);
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Boolean.h
extern "C" {
typedef enum { FALSE = 0, TRUE } Rboolean;
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Callbacks.h
typedef Rboolean (*R_ToplevelCallback)(SEXP expr, SEXP value, Rboolean succeeded, Rboolean visible, void *);
typedef struct _ToplevelCallback R_ToplevelCallbackEl;
struct _ToplevelCallback {
R_ToplevelCallback cb;
void *data;
void (*finalizer)(void *data);
char *name;
R_ToplevelCallbackEl *next;
};
extern "C" {
Rboolean Rf_removeTaskCallbackByIndex(int id); // Rf_removeTaskCallbackByIndex unused
Rboolean Rf_removeTaskCallbackByName(const char *name); // Rf_removeTaskCallbackByName unused
SEXP R_removeTaskCallback(SEXP which); // R_removeTaskCallback unused
R_ToplevelCallbackEl* Rf_addTaskCallback(R_ToplevelCallback cb, void *data, void (*finalizer)(void *), const char *name, int *pos);
typedef struct _R_ObjectTable R_ObjectTable;
typedef Rboolean (*Rdb_exists)(const char * const name, Rboolean *canCache, R_ObjectTable *);
typedef SEXP (*Rdb_get)(const char * const name, Rboolean *canCache, R_ObjectTable *);
typedef int (*Rdb_remove)(const char * const name, R_ObjectTable *);
typedef SEXP (*Rdb_assign)(const char * const name, SEXP value, R_ObjectTable *);
typedef SEXP (*Rdb_objects)(R_ObjectTable *);
typedef Rboolean (*Rdb_canCache)(const char * const name, R_ObjectTable *);
typedef void (*Rdb_onDetach)(R_ObjectTable *);
typedef void (*Rdb_onAttach)(R_ObjectTable *);
struct _R_ObjectTable{
int type;
char **cachedNames;
Rboolean active;
Rdb_exists exists;
Rdb_get get;
Rdb_remove remove;
Rdb_assign assign;
Rdb_objects objects;
Rdb_canCache canCache;
Rdb_onDetach onDetach;
Rdb_onAttach onAttach;
void *privateData;
};
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Complex.h
extern "C" {
typedef struct {
double r;
double i;
} Rcomplex; // Rcomplex used 893 times in 47 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Connections.h
typedef struct Rconn *Rconnection;
struct Rconn {
char* class;
char* description;
int enc;
char mode[5];
Rboolean text, isopen, incomplete, canread, canwrite, canseek, blocking,
isGzcon;
Rboolean (*open)(struct Rconn *);
void (*close)(struct Rconn *);
void (*destroy)(struct Rconn *);
int (*vfprintf)(struct Rconn *, const char *, va_list);
int (*fgetc)(struct Rconn *);
int (*fgetc_internal)(struct Rconn *);
double (*seek)(struct Rconn *, double, int, int);
void (*truncate)(struct Rconn *);
int (*fflush)(struct Rconn *);
size_t (*read)(void *, size_t, size_t, struct Rconn *);
size_t (*write)(const void *, size_t, size_t, struct Rconn *);
int nPushBack, posPushBack;
char **PushBack;
int save, save2;
char encname[101];
void *inconv, *outconv;
char iconvbuff[25], oconvbuff[50], *next, init_out[25];
short navail, inavail;
Rboolean EOF_signalled;
Rboolean UTF8out;
void *id;
void *ex_ptr;
void *private;
int status;
};
extern "C" {
SEXP R_new_custom_connection(const char *description, const char *mode, const char *class_name, Rconnection *ptr); // R_new_custom_connection used 2 times in curl, rredis
size_t R_ReadConnection(Rconnection con, void *buf, size_t n); // R_ReadConnection used 1 time in iotools
size_t R_WriteConnection(Rconnection con, void *buf, size_t n); // R_WriteConnection used 4 times in Cairo
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Error.h
extern "C" {
void __attribute__((noreturn)) Rf_error(const char *, ...);
void __attribute__((noreturn)) UNIMPLEMENTED(const char *);
void __attribute__((noreturn)) WrongArgCount(const char *);
void Rf_warning(const char *, ...); // Rf_warning used 316 times in 66 packages
// warning used 7679 times in 434 packages
void R_ShowMessage(const char *s); // R_ShowMessage used 104 times in Rserve, rJava, HiPLARM
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/GetX11Image.h
extern "C" {
Rboolean R_GetX11Image(int d, void *pximage, int *pwidth, int *pheight); // R_GetX11Image used 1 time in tkrplot
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/GraphicsDevice.h
extern "C" {
typedef struct _DevDesc DevDesc;
typedef DevDesc* pDevDesc;
struct _DevDesc {
double left;
double right;
double bottom;
double top;
double clipLeft;
double clipRight;
double clipBottom;
double clipTop;
double xCharOffset;
double yCharOffset;
double yLineBias;
double ipr[2];
double cra[2];
double gamma;
Rboolean canClip;
Rboolean canChangeGamma;
int canHAdj;
double startps;
int startcol;
int startfill;
int startlty;
int startfont;
double startgamma;
void *deviceSpecific;
Rboolean displayListOn;
Rboolean canGenMouseDown;
Rboolean canGenMouseMove;
Rboolean canGenMouseUp;
Rboolean canGenKeybd;
Rboolean gettingEvent;
void (*activate)(const pDevDesc );
void (*circle)(double x, double y, double r, const pGEcontext gc, pDevDesc dd);
void (*clip)(double x0, double x1, double y0, double y1, pDevDesc dd);
void (*close)(pDevDesc dd);
void (*deactivate)(pDevDesc );
Rboolean (*locator)(double *x, double *y, pDevDesc dd);
void (*line)(double x1, double y1, double x2, double y2,
const pGEcontext gc, pDevDesc dd);
void (*metricInfo)(int c, const pGEcontext gc,
double* ascent, double* descent, double* width,
pDevDesc dd);
void (*mode)(int mode, pDevDesc dd);
void (*newPage)(const pGEcontext gc, pDevDesc dd);
void (*polygon)(int n, double *x, double *y, const pGEcontext gc, pDevDesc dd);
void (*polyline)(int n, double *x, double *y, const pGEcontext gc, pDevDesc dd);
void (*rect)(double x0, double y0, double x1, double y1,
const pGEcontext gc, pDevDesc dd);
void (*path)(double *x, double *y,
int npoly, int *nper,
Rboolean winding,
const pGEcontext gc, pDevDesc dd);
void (*raster)(unsigned int *raster, int w, int h,
double x, double y,
double width, double height,
double rot,
Rboolean interpolate,
const pGEcontext gc, pDevDesc dd);
SEXP (*cap)(pDevDesc dd);
void (*size)(double *left, double *right, double *bottom, double *top,
pDevDesc dd);
double (*strWidth)(const char *str, const pGEcontext gc, pDevDesc dd);
void (*text)(double x, double y, const char *str, double rot,
double hadj, const pGEcontext gc, pDevDesc dd);
void (*onExit)(pDevDesc dd);
SEXP (*getEvent)(SEXP, const char *);
Rboolean (*newFrameConfirm)(pDevDesc dd);
Rboolean hasTextUTF8;
void (*textUTF8)(double x, double y, const char *str, double rot,
double hadj, const pGEcontext gc, pDevDesc dd);
double (*strWidthUTF8)(const char *str, const pGEcontext gc, pDevDesc dd);
Rboolean wantSymbolUTF8;
Rboolean useRotatedTextInContour;
SEXP eventEnv; // eventEnv used 3 times in cairoDevice, R2SWF
void (*eventHelper)(pDevDesc dd, int code);
int (*holdflush)(pDevDesc dd, int level);
int haveTransparency;
int haveTransparentBg;
int haveRaster;
int haveCapture, haveLocator;
char reserved[64];
};
int Rf_ndevNumber(pDevDesc ); // Rf_ndevNumber unused
// ndevNumber used 11 times in Cairo, JavaGD, cairoDevice
int Rf_NumDevices(void); // Rf_NumDevices unused
// NumDevices used 3 times in JavaGD
void R_CheckDeviceAvailable(void); // R_CheckDeviceAvailable used 14 times in 12 packages
Rboolean R_CheckDeviceAvailableBool(void); // R_CheckDeviceAvailableBool unused
int Rf_curDevice(void); // Rf_curDevice unused
// curDevice used 4 times in qtutils, showtext, tkrplot
int Rf_nextDevice(int); // Rf_nextDevice unused
// nextDevice used 3 times in rgl
int Rf_prevDevice(int); // Rf_prevDevice unused
// prevDevice unused
int Rf_selectDevice(int); // Rf_selectDevice unused
// selectDevice unused
void Rf_killDevice(int); // Rf_killDevice unused
// killDevice used 3 times in tkrplot
int Rf_NoDevices(void); // Rf_NoDevices unused
// NoDevices used 1 time in tkrplot
void Rf_NewFrameConfirm(pDevDesc); // Rf_NewFrameConfirm unused
// NewFrameConfirm unused
typedef enum {knUNKNOWN = -1,
knLEFT = 0, knUP, knRIGHT, knDOWN,
knF1, knF2, knF3, knF4, knF5, knF6, knF7, knF8, knF9, knF10,
knF11, knF12,
knPGUP, knPGDN, knEND, knHOME, knINS, knDEL} R_KeyName;
typedef enum {meMouseDown = 0,
meMouseUp,
meMouseMove} R_MouseEvent;
void Rf_doMouseEvent(pDevDesc dd, R_MouseEvent event, // Rf_doMouseEvent unused
// doMouseEvent used 6 times in cairoDevice
int buttons, double x, double y);
void Rf_doKeybd(pDevDesc dd, R_KeyName rkey, // Rf_doKeybd unused
// doKeybd used 2 times in cairoDevice
const char *keyname);
extern Rboolean R_interrupts_suspended; // R_interrupts_suspended unused
extern int R_interrupts_pending; // R_interrupts_pending used 6 times in igraph, rJava
extern void Rf_onintr(void); // Rf_onintr used 216 times in 12 packages
// onintr used 1 time in rJava
extern Rboolean mbcslocale; // mbcslocale used 7 times in qtutils, RCurl, cairoDevice, Cairo, RSvgDevice, PCICt
extern void *Rf_AdobeSymbol2utf8(char*out, const char *in, size_t nwork); // Rf_AdobeSymbol2utf8 unused
// AdobeSymbol2utf8 used 2 times in Cairo
extern size_t Rf_ucstoutf8(char *s, const unsigned int c); // Rf_ucstoutf8 used 7 times in cairoDevice, Cairo, rvg, svglite
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/GraphicsEngine.h
extern "C" {
int R_GE_getVersion(void); // R_GE_getVersion unused
void R_GE_checkVersionOrDie(int version); // R_GE_checkVersionOrDie used 11 times in qtutils, rscproxy, cairoDevice, RSvgDevice, R2SWF, rvg, RSVGTipsDevice, tikzDevice, svglite
typedef enum {
GE_DEVICE = 0,
GE_NDC = 1,
GE_INCHES = 2,
GE_CM = 3
} GEUnit; // GEUnit unused
typedef enum {
GE_InitState = 0,
GE_FinaliseState = 1,
GE_SaveState = 2,
GE_RestoreState = 6,
GE_CopyState = 3,
GE_SaveSnapshotState = 4,
GE_RestoreSnapshotState = 5,
GE_CheckPlot = 7,
GE_ScalePS = 8
} GEevent; // GEevent unused
typedef enum {
GE_ROUND_CAP = 1,
GE_BUTT_CAP = 2,
GE_SQUARE_CAP = 3
} R_GE_lineend; // R_GE_lineend used 3 times in qtutils, Cairo
typedef enum {
GE_ROUND_JOIN = 1,
GE_MITRE_JOIN = 2,
GE_BEVEL_JOIN = 3
} R_GE_linejoin; // R_GE_linejoin used 3 times in qtutils, Cairo
typedef struct {
int col;
int fill;
double gamma;
double lwd;
int lty;
R_GE_lineend lend;
R_GE_linejoin ljoin;
double lmitre;
double cex;
double ps;
double lineheight;
int fontface;
char fontfamily[201];
} R_GE_gcontext; // R_GE_gcontext used 87 times in qtutils, Cairo, RSvgDevice, rvg, RSVGTipsDevice, JavaGD, showtext
typedef R_GE_gcontext* pGEcontext;
typedef struct _GEDevDesc GEDevDesc;
typedef SEXP (* GEcallback)(GEevent, GEDevDesc *, SEXP);
typedef struct {
void *systemSpecific;
GEcallback callback;
} GESystemDesc; // GESystemDesc unused
struct _GEDevDesc {
pDevDesc dev;
Rboolean displayListOn;
SEXP displayList; // displayList used 30 times in qtutils, rgl, Cairo, JavaGD, R2SWF
SEXP DLlastElt; // DLlastElt unused
SEXP savedSnapshot; // savedSnapshot used 4 times in qtutils, Cairo, JavaGD
Rboolean dirty;
Rboolean recordGraphics;
GESystemDesc *gesd[256];
Rboolean ask;
};
typedef GEDevDesc* pGEDevDesc;
pGEDevDesc Rf_desc2GEDesc(pDevDesc dd); // Rf_desc2GEDesc unused
// desc2GEDesc used 5 times in Cairo, JavaGD, cairoDevice
int GEdeviceNumber(pGEDevDesc); // GEdeviceNumber used 4 times in Cairo, JavaGD
pGEDevDesc GEgetDevice(int); // GEgetDevice used 20 times in tikzDevice, Cairo, JavaGD, rvg, showtext
void GEaddDevice(pGEDevDesc); // GEaddDevice used 4 times in Cairo, JavaGD
void GEaddDevice2(pGEDevDesc, const char *); // GEaddDevice2 used 12 times in qtutils, devEMF, rscproxy, cairoDevice, RSvgDevice, R2SWF, rvg, RSVGTipsDevice, tikzDevice, svglite
void GEaddDevice2f(pGEDevDesc, const char *, const char *); // GEaddDevice2f unused
void GEkillDevice(pGEDevDesc); // GEkillDevice used 4 times in Cairo, JavaGD, cairoDevice
pGEDevDesc GEcreateDevDesc(pDevDesc dev); // GEcreateDevDesc used 14 times in 12 packages
void GEdestroyDevDesc(pGEDevDesc dd); // GEdestroyDevDesc unused
void *GEsystemState(pGEDevDesc dd, int index); // GEsystemState unused
void GEregisterWithDevice(pGEDevDesc dd); // GEregisterWithDevice unused
void GEregisterSystem(GEcallback callback, int *systemRegisterIndex); // GEregisterSystem unused
void GEunregisterSystem(int registerIndex); // GEunregisterSystem unused
SEXP GEhandleEvent(GEevent event, pDevDesc dev, SEXP data); // GEhandleEvent unused
double GEfromDeviceX(double value, GEUnit to, pGEDevDesc dd); // GEfromDeviceX unused
// fromDeviceX used 1 time in RSVGTipsDevice
double GEtoDeviceX(double value, GEUnit from, pGEDevDesc dd); // GEtoDeviceX unused
// toDeviceX used 1 time in RSVGTipsDevice
double GEfromDeviceY(double value, GEUnit to, pGEDevDesc dd); // GEfromDeviceY unused
// fromDeviceY unused
double GEtoDeviceY(double value, GEUnit from, pGEDevDesc dd); // GEtoDeviceY unused
// toDeviceY unused
double GEfromDeviceWidth(double value, GEUnit to, pGEDevDesc dd); // GEfromDeviceWidth unused
// fromDeviceWidth unused
double GEtoDeviceWidth(double value, GEUnit from, pGEDevDesc dd); // GEtoDeviceWidth unused
// toDeviceWidth unused
double GEfromDeviceHeight(double value, GEUnit to, pGEDevDesc dd); // GEfromDeviceHeight unused
// fromDeviceHeight unused
double GEtoDeviceHeight(double value, GEUnit from, pGEDevDesc dd); // GEtoDeviceHeight unused
// toDeviceHeight unused
typedef unsigned int rcolor;
rcolor Rf_RGBpar(SEXP, int); // Rf_RGBpar unused
// RGBpar used 3 times in Cairo, jpeg
rcolor Rf_RGBpar3(SEXP, int, rcolor); // Rf_RGBpar3 unused
// RGBpar3 unused
const char *Rf_col2name(rcolor col); // Rf_col2name unused
// col2name used 2 times in tikzDevice
rcolor R_GE_str2col(const char *s); // R_GE_str2col used 13 times in devEMF, RSVGTipsDevice, tikzDevice, RSvgDevice, rvg, svglite
R_GE_lineend GE_LENDpar(SEXP value, int ind); // GE_LENDpar unused
SEXP GE_LENDget(R_GE_lineend lend); // GE_LENDget unused
R_GE_linejoin GE_LJOINpar(SEXP value, int ind); // GE_LJOINpar unused
SEXP GE_LJOINget(R_GE_linejoin ljoin); // GE_LJOINget unused
void GESetClip(double x1, double y1, double x2, double y2, pGEDevDesc dd); // GESetClip unused
void GENewPage(const pGEcontext gc, pGEDevDesc dd); // GENewPage unused
void GELine(double x1, double y1, double x2, double y2, // GELine unused
const pGEcontext gc, pGEDevDesc dd);
void GEPolyline(int n, double *x, double *y, // GEPolyline unused
const pGEcontext gc, pGEDevDesc dd);
void GEPolygon(int n, double *x, double *y, // GEPolygon unused
const pGEcontext gc, pGEDevDesc dd);
SEXP GEXspline(int n, double *x, double *y, double *s, Rboolean open, // GEXspline unused
Rboolean repEnds, Rboolean draw,
const pGEcontext gc, pGEDevDesc dd);
void GECircle(double x, double y, double radius, // GECircle unused
const pGEcontext gc, pGEDevDesc dd);
void GERect(double x0, double y0, double x1, double y1, // GERect unused
const pGEcontext gc, pGEDevDesc dd);
void GEPath(double *x, double *y, // GEPath unused
int npoly, int *nper,
Rboolean winding,
const pGEcontext gc, pGEDevDesc dd);
void GERaster(unsigned int *raster, int w, int h, // GERaster unused
double x, double y, double width, double height,
double angle, Rboolean interpolate,
const pGEcontext gc, pGEDevDesc dd);
SEXP GECap(pGEDevDesc dd); // GECap unused
void GEText(double x, double y, const char * const str, cetype_t enc, // GEText unused
double xc, double yc, double rot,
const pGEcontext gc, pGEDevDesc dd);
void GEMode(int mode, pGEDevDesc dd); // GEMode unused
void GESymbol(double x, double y, int pch, double size, // GESymbol unused
const pGEcontext gc, pGEDevDesc dd);
void GEPretty(double *lo, double *up, int *ndiv); // GEPretty unused
void GEMetricInfo(int c, const pGEcontext gc, // GEMetricInfo unused
double *ascent, double *descent, double *width,
pGEDevDesc dd);
double GEStrWidth(const char *str, cetype_t enc, // GEStrWidth unused
const pGEcontext gc, pGEDevDesc dd);
double GEStrHeight(const char *str, cetype_t enc, // GEStrHeight unused
const pGEcontext gc, pGEDevDesc dd);
void GEStrMetric(const char *str, cetype_t enc, const pGEcontext gc, // GEStrMetric unused
double *ascent, double *descent, double *width,
pGEDevDesc dd);
int GEstring_to_pch(SEXP pch); // GEstring_to_pch unused
unsigned int GE_LTYpar(SEXP, int);
SEXP GE_LTYget(unsigned int); // GE_LTYget unused
void R_GE_rasterScale(unsigned int *sraster, int sw, int sh, // R_GE_rasterScale unused
unsigned int *draster, int dw, int dh);
void R_GE_rasterInterpolate(unsigned int *sraster, int sw, int sh, // R_GE_rasterInterpolate unused
unsigned int *draster, int dw, int dh);
void R_GE_rasterRotatedSize(int w, int h, double angle, // R_GE_rasterRotatedSize unused
int *wnew, int *hnew);
void R_GE_rasterRotatedOffset(int w, int h, double angle, int botleft, // R_GE_rasterRotatedOffset unused
double *xoff, double *yoff);
void R_GE_rasterResizeForRotation(unsigned int *sraster, // R_GE_rasterResizeForRotation unused
int w, int h,
unsigned int *newRaster,
int wnew, int hnew,
const pGEcontext gc);
void R_GE_rasterRotate(unsigned int *sraster, int w, int h, double angle, // R_GE_rasterRotate unused
unsigned int *draster, const pGEcontext gc,
Rboolean perPixelAlpha);
double GEExpressionWidth(SEXP expr, // GEExpressionWidth unused
const pGEcontext gc, pGEDevDesc dd);
double GEExpressionHeight(SEXP expr, // GEExpressionHeight unused
const pGEcontext gc, pGEDevDesc dd);
void GEExpressionMetric(SEXP expr, const pGEcontext gc, // GEExpressionMetric unused
double *ascent, double *descent, double *width,
pGEDevDesc dd);
void GEMathText(double x, double y, SEXP expr, // GEMathText unused
double xc, double yc, double rot,
const pGEcontext gc, pGEDevDesc dd);
SEXP GEcontourLines(double *x, int nx, double *y, int ny, // GEcontourLines unused
double *z, double *levels, int nl);
double R_GE_VStrWidth(const char *s, cetype_t enc, const pGEcontext gc, pGEDevDesc dd); // R_GE_VStrWidth unused
double R_GE_VStrHeight(const char *s, cetype_t enc, const pGEcontext gc, pGEDevDesc dd); // R_GE_VStrHeight unused
void R_GE_VText(double x, double y, const char * const s, cetype_t enc, // R_GE_VText unused
double x_justify, double y_justify, double rotation,
const pGEcontext gc, pGEDevDesc dd);
pGEDevDesc GEcurrentDevice(void); // GEcurrentDevice used 9 times in RSVGTipsDevice, tikzDevice, cairoDevice
Rboolean GEdeviceDirty(pGEDevDesc dd); // GEdeviceDirty unused
void GEdirtyDevice(pGEDevDesc dd); // GEdirtyDevice unused
Rboolean GEcheckState(pGEDevDesc dd); // GEcheckState unused
Rboolean GErecording(SEXP call, pGEDevDesc dd); // GErecording unused
void GErecordGraphicOperation(SEXP op, SEXP args, pGEDevDesc dd); // GErecordGraphicOperation unused
void GEinitDisplayList(pGEDevDesc dd); // GEinitDisplayList used 8 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD, rvg, svglite
void GEplayDisplayList(pGEDevDesc dd); // GEplayDisplayList used 5 times in Cairo, JavaGD, cairoDevice
void GEcopyDisplayList(int fromDevice); // GEcopyDisplayList unused
SEXP GEcreateSnapshot(pGEDevDesc dd); // GEcreateSnapshot used 1 time in Cairo
void GEplaySnapshot(SEXP snapshot, pGEDevDesc dd); // GEplaySnapshot unused
void GEonExit(void); // GEonExit unused
void GEnullDevice(void); // GEnullDevice unused
SEXP Rf_CreateAtVector(double*, double*, int, Rboolean); // Rf_CreateAtVector unused
// CreateAtVector unused
void Rf_GAxisPars(double *min, double *max, int *n, Rboolean log, int axis); // Rf_GAxisPars unused
// GAxisPars unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Lapack.h
extern void ilaver_(int *major, int *minor, int *patch); // ilaver_ used 2 times in ltsk
extern "C" {
extern void
dbdsqr_(const char* uplo, const int* n, const int* ncvt,
const int* nru, const int* ncc, double* d, double* e,
double* vt, const int* ldvt, double* u, const int* ldu,
double* c, const int* ldc, double* work, int* info);
extern void
ddisna_(const char* job, const int* m, const int* n,
double* d, double* sep, int* info);
extern void
dgbbrd_(const char* vect, const int* m, const int* n,
const int* ncc, const int* kl, const int* ku,
double* ab, const int* ldab,
double* d, double* e, double* q,
const int* ldq, double* pt, const int* ldpt,
double* c, const int* ldc,
double* work, int* info);
extern void
dgbcon_(const char* norm, const int* n, const int* kl,
const int* ku, double* ab, const int* ldab,
int* ipiv, const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dgbequ_(const int* m, const int* n, const int* kl, const int* ku,
double* ab, const int* ldab, double* r, double* c,
double* rowcnd, double* colcnd, double* amax, int* info);
extern void
dgbrfs_(const char* trans, const int* n, const int* kl,
const int* ku, const int* nrhs, double* ab,
const int* ldab, double* afb, const int* ldafb,
int* ipiv, double* b, const int* ldb,
double* x, const int* ldx, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgbsv_(const int* n, const int* kl,const int* ku,
const int* nrhs, double* ab, const int* ldab,
int* ipiv, double* b, const int* ldb, int* info);
extern void
dgbsvx_(const int* fact, const char* trans,
const int* n, const int* kl,const int* ku,
const int* nrhs, double* ab, const int* ldab,
double* afb, const int* ldafb, int* ipiv,
const char* equed, double* r, double* c,
double* b, const int* ldb,
double* x, const int* ldx,
double* rcond, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgbtf2_(const int* m, const int* n, const int* kl,const int* ku,
double* ab, const int* ldab, int* ipiv, int* info);
extern void
dgbtrf_(const int* m, const int* n, const int* kl,const int* ku,
double* ab, const int* ldab, int* ipiv, int* info);
extern void
dgbtrs_(const char* trans, const int* n,
const int* kl, const int* ku, const int* nrhs,
const double* ab, const int* ldab, const int* ipiv,
double* b, const int* ldb, int* info);
extern void
dgebak_(const char* job, const char* side, const int* n,
const int* ilo, const int* ihi, double* scale,
const int* m, double* v, const int* ldv, int* info);
extern void
dgebal_(const char* job, const int* n, double* a, const int* lda,
int* ilo, int* ihi, double* scale, int* info);
extern void
dgebd2_(const int* m, const int* n, double* a, const int* lda,
double* d, double* e, double* tauq, double* taup,
double* work, int* info);
extern void
dgebrd_(const int* m, const int* n, double* a, const int* lda,
double* d, double* e, double* tauq, double* taup,
double* work, const int* lwork, int* info);
extern void
dgecon_(const char* norm, const int* n,
const double* a, const int* lda,
const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dgeequ_(const int* m, const int* n, double* a, const int* lda,
double* r, double* c, double* rowcnd, double* colcnd,
double* amax, int* info);
extern void
dgees_(const char* jobvs, const char* sort,
int (*select)(const double*, const double*),
const int* n, double* a, const int* lda,
int* sdim, double* wr, double* wi,
double* vs, const int* ldvs,
double* work, const int* lwork, int* bwork, int* info);
extern void
dgeesx_(const char* jobvs, const char* sort,
int (*select)(const double*, const double*),
const char* sense, const int* n, double* a,
const int* lda, int* sdim, double* wr, double* wi,
double* vs, const int* ldvs, double* rconde,
double* rcondv, double* work, const int* lwork,
int* iwork, const int* liwork, int* bwork, int* info);
extern void
dgeev_(const char* jobvl, const char* jobvr,
const int* n, double* a, const int* lda,
double* wr, double* wi, double* vl, const int* ldvl,
double* vr, const int* ldvr,
double* work, const int* lwork, int* info);
extern void
dgeevx_(const char* balanc, const char* jobvl, const char* jobvr,
const char* sense, const int* n, double* a, const int* lda,
double* wr, double* wi, double* vl, const int* ldvl,
double* vr, const int* ldvr, int* ilo, int* ihi,
double* scale, double* abnrm, double* rconde, double* rcondv,
double* work, const int* lwork, int* iwork, int* info);
extern void
dgegv_(const char* jobvl, const char* jobvr,
const int* n, double* a, const int* lda,
double* b, const int* ldb,
double* alphar, double* alphai,
const double* beta, double* vl, const int* ldvl,
double* vr, const int* ldvr,
double* work, const int* lwork, int* info);
extern void
dgehd2_(const int* n, const int* ilo, const int* ihi,
double* a, const int* lda, double* tau,
double* work, int* info);
extern void
dgehrd_(const int* n, const int* ilo, const int* ihi,
double* a, const int* lda, double* tau,
double* work, const int* lwork, int* info);
extern void
dgelq2_(const int* m, const int* n,
double* a, const int* lda, double* tau,
double* work, int* info);
extern void
dgelqf_(const int* m, const int* n,
double* a, const int* lda, double* tau,
double* work, const int* lwork, int* info);
extern void
dgels_(const char* trans, const int* m, const int* n,
const int* nrhs, double* a, const int* lda,
double* b, const int* ldb,
double* work, const int* lwork, int* info);
extern void
dgelss_(const int* m, const int* n, const int* nrhs,
double* a, const int* lda, double* b, const int* ldb,
double* s, double* rcond, int* rank,
double* work, const int* lwork, int* info);
extern void
dgelsy_(const int* m, const int* n, const int* nrhs,
double* a, const int* lda, double* b, const int* ldb,
int* jpvt, const double* rcond, int* rank,
double* work, const int* lwork, int* info);
extern void
dgeql2_(const int* m, const int* n, double* a, const int* lda,
double* tau, double* work, int* info);
extern void
dgeqlf_(const int* m, const int* n,
double* a, const int* lda, double* tau,
double* work, const int* lwork, int* info);
extern void
dgeqp3_(const int* m, const int* n, double* a, const int* lda,
int* jpvt, double* tau, double* work, const int* lwork,
int* info);
extern void
dgeqpf_(const int* m, const int* n, double* a, const int* lda,
int* jpvt, double* tau, double* work, int* info);
extern void
dgeqr2_(const int* m, const int* n, double* a, const int* lda,
double* tau, double* work, int* info);
extern void
dgeqrf_(const int* m, const int* n, double* a, const int* lda,
double* tau, double* work, const int* lwork, int* info);
extern void
dgerfs_(const char* trans, const int* n, const int* nrhs,
double* a, const int* lda, double* af, const int* ldaf,
int* ipiv, double* b, const int* ldb,
double* x, const int* ldx, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgerq2_(const int* m, const int* n, double* a, const int* lda,
double* tau, double* work, int* info);
extern void
dgerqf_(const int* m, const int* n, double* a, const int* lda,
double* tau, double* work, const int* lwork, int* info);
extern void
dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
int* ipiv, double* b, const int* ldb, int* info);
extern void
dgesvd_(const char* jobu, const char* jobvt, const int* m,
const int* n, double* a, const int* lda, double* s,
double* u, const int* ldu, double* vt, const int* ldvt,
double* work, const int* lwork, int* info);
extern void
dgesvx_(const char* fact, const char* trans, const int* n,
const int* nrhs, double* a, const int* lda,
double* af, const int* ldaf, int* ipiv,
char *equed, double* r, double* c,
double* b, const int* ldb,
double* x, const int* ldx,
double* rcond, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgetf2_(const int* m, const int* n, double* a, const int* lda,
int* ipiv, int* info);
extern void
dgetrf_(const int* m, const int* n, double* a, const int* lda,
int* ipiv, int* info);
extern void
dgetri_(const int* n, double* a, const int* lda,
int* ipiv, double* work, const int* lwork, int* info);
extern void
dgetrs_(const char* trans, const int* n, const int* nrhs,
const double* a, const int* lda, const int* ipiv,
double* b, const int* ldb, int* info);
extern void
dggbak_(const char* job, const char* side,
const int* n, const int* ilo, const int* ihi,
double* lscale, double* rscale, const int* m,
double* v, const int* ldv, int* info);
extern void
dggbal_(const char* job, const int* n, double* a, const int* lda,
double* b, const int* ldb, int* ilo, int* ihi,
double* lscale, double* rscale, double* work, int* info);
extern void
dgges_(const char* jobvsl, const char* jobvsr, const char* sort,
int (*delztg)(double*, double*, double*),
const int* n, double* a, const int* lda,
double* b, const int* ldb, double* alphar,
double* alphai, const double* beta,
double* vsl, const int* ldvsl,
double* vsr, const int* ldvsr,
double* work, const int* lwork, int* bwork, int* info);
extern void
dggglm_(const int* n, const int* m, const int* p,
double* a, const int* lda, double* b, const int* ldb,
double* d, double* x, double* y,
double* work, const int* lwork, int* info);
extern void
dgghrd_(const char* compq, const char* compz, const int* n,
const int* ilo, const int* ihi, double* a, const int* lda,
double* b, const int* ldb, double* q, const int* ldq,
double* z, const int* ldz, int* info);
extern void
dgglse_(const int* m, const int* n, const int* p,
double* a, const int* lda,
double* b, const int* ldb,
double* c, double* d, double* x,
double* work, const int* lwork, int* info);
extern void
dggqrf_(const int* n, const int* m, const int* p,
double* a, const int* lda, double* taua,
double* b, const int* ldb, double* taub,
double* work, const int* lwork, int* info);
extern void
dggrqf_(const int* m, const int* p, const int* n,
double* a, const int* lda, double* taua,
double* b, const int* ldb, double* taub,
double* work, const int* lwork, int* info);
extern void
dggsvd_(const char* jobu, const char* jobv, const char* jobq,
const int* m, const int* n, const int* p,
const int* k, const int* l,
double* a, const int* lda,
double* b, const int* ldb,
const double* alpha, const double* beta,
double* u, const int* ldu,
double* v, const int* ldv,
double* q, const int* ldq,
double* work, int* iwork, int* info);
extern void
dgtcon_(const char* norm, const int* n, double* dl, double* d,
double* du, double* du2, int* ipiv, const double* anorm,
double* rcond, double* work, int* iwork, int* info);
extern void
dgtrfs_(const char* trans, const int* n, const int* nrhs,
double* dl, double* d, double* du, double* dlf,
double* df, double* duf, double* du2,
int* ipiv, double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgtsv_(const int* n, const int* nrhs,
double* dl, double* d, double* du,
double* b, const int* ldb, int* info);
extern void
dgtsvx_(const int* fact, const char* trans,
const int* n, const int* nrhs,
double* dl, double* d, double* du,
double* dlf, double* df, double* duf,
double* du2, int* ipiv,
double* b, const int* ldb,
double* x, const int* ldx,
double* rcond, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dgttrf_(const int* n, double* dl, double* d,
double* du, double* du2, int* ipiv, int* info);
extern void
dgttrs_(const char* trans, const int* n, const int* nrhs,
double* dl, double* d, double* du, double* du2,
int* ipiv, double* b, const int* ldb, int* info);
extern void
dopgtr_(const char* uplo, const int* n,
const double* ap, const double* tau,
double* q, const int* ldq,
double* work, int* info);
extern void
dopmtr_(const char* side, const char* uplo,
const char* trans, const int* m, const int* n,
const double* ap, const double* tau,
double* c, const int* ldc,
double* work, int* info);
extern void
dorg2l_(const int* m, const int* n, const int* k,
double* a, const int* lda,
const double* tau, double* work, int* info);
extern void
dorg2r_(const int* m, const int* n, const int* k,
double* a, const int* lda,
const double* tau, double* work, int* info);
extern void
dorgbr_(const char* vect, const int* m,
const int* n, const int* k,
double* a, const int* lda,
const double* tau, double* work,
const int* lwork, int* info);
extern void
dorghr_(const int* n, const int* ilo, const int* ihi,
double* a, const int* lda, const double* tau,
double* work, const int* lwork, int* info);
extern void
dorgl2_(const int* m, const int* n, const int* k,
double* a, const int* lda, const double* tau,
double* work, int* info);
extern void
dorglq_(const int* m, const int* n, const int* k,
double* a, const int* lda,
const double* tau, double* work,
const int* lwork, int* info);
extern void
dorgql_(const int* m, const int* n, const int* k,
double* a, const int* lda,
const double* tau, double* work,
const int* lwork, int* info);
extern void
dorgqr_(const int* m, const int* n, const int* k,
double* a, const int* lda, const double* tau,
double* work, const int* lwork, int* info);
extern void
dorgr2_(const int* m, const int* n, const int* k,
double* a, const int* lda, const double* tau,
double* work, int* info);
extern void
dorgrq_(const int* m, const int* n, const int* k,
double* a, const int* lda, const double* tau,
double* work, const int* lwork, int* info);
extern void
dorgtr_(const char* uplo, const int* n,
double* a, const int* lda, const double* tau,
double* work, const int* lwork, int* info);
extern void
dorm2l_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, int* info);
extern void
dorm2r_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda, const double* tau,
double* c, const int* ldc, double* work, int* info);
extern void
dormbr_(const char* vect, const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda, const double* tau,
double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dormhr_(const char* side, const char* trans, const int* m,
const int* n, const int* ilo, const int* ihi,
const double* a, const int* lda, const double* tau,
double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dorml2_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda, const double* tau,
double* c, const int* ldc, double* work, int* info);
extern void
dormlq_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dormql_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dormqr_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dormr2_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, int* info);
extern void
dormrq_(const char* side, const char* trans,
const int* m, const int* n, const int* k,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dormtr_(const char* side, const char* uplo,
const char* trans, const int* m, const int* n,
const double* a, const int* lda,
const double* tau, double* c, const int* ldc,
double* work, const int* lwork, int* info);
extern void
dpbcon_(const char* uplo, const int* n, const int* kd,
const double* ab, const int* ldab,
const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dpbequ_(const char* uplo, const int* n, const int* kd,
const double* ab, const int* ldab,
double* s, double* scond, double* amax, int* info);
extern void
dpbrfs_(const char* uplo, const int* n,
const int* kd, const int* nrhs,
const double* ab, const int* ldab,
const double* afb, const int* ldafb,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dpbstf_(const char* uplo, const int* n, const int* kd,
double* ab, const int* ldab, int* info);
extern void
dpbsv_(const char* uplo, const int* n,
const int* kd, const int* nrhs,
double* ab, const int* ldab,
double* b, const int* ldb, int* info);
extern void
dpbsvx_(const int* fact, const char* uplo, const int* n,
const int* kd, const int* nrhs,
double* ab, const int* ldab,
double* afb, const int* ldafb,
char* equed, double* s,
double* b, const int* ldb,
double* x, const int* ldx, double* rcond,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dpbtf2_(const char* uplo, const int* n, const int* kd,
double* ab, const int* ldab, int* info);
extern void
dpbtrf_(const char* uplo, const int* n, const int* kd,
double* ab, const int* ldab, int* info);
extern void
dpbtrs_(const char* uplo, const int* n,
const int* kd, const int* nrhs,
const double* ab, const int* ldab,
double* b, const int* ldb, int* info);
extern void
dpocon_(const char* uplo, const int* n,
const double* a, const int* lda,
const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dpoequ_(const int* n, const double* a, const int* lda,
double* s, double* scond, double* amax, int* info);
extern void
dporfs_(const char* uplo, const int* n, const int* nrhs,
const double* a, const int* lda,
const double* af, const int* ldaf,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dposv_(const char* uplo, const int* n, const int* nrhs,
double* a, const int* lda,
double* b, const int* ldb, int* info);
extern void
dposvx_(const int* fact, const char* uplo,
const int* n, const int* nrhs,
double* a, const int* lda,
double* af, const int* ldaf, char* equed,
double* s, double* b, const int* ldb,
double* x, const int* ldx, double* rcond,
double* ferr, double* berr, double* work,
int* iwork, int* info);
extern void
dpotf2_(const char* uplo, const int* n,
double* a, const int* lda, int* info);
extern void
dpotrf_(const char* uplo, const int* n,
double* a, const int* lda, int* info);
extern void
dpotri_(const char* uplo, const int* n,
double* a, const int* lda, int* info);
extern void
dpotrs_(const char* uplo, const int* n,
const int* nrhs, const double* a, const int* lda,
double* b, const int* ldb, int* info);
extern void
dppcon_(const char* uplo, const int* n,
const double* ap, const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dppequ_(const char* uplo, const int* n,
const double* ap, double* s, double* scond,
double* amax, int* info);
extern void
dpprfs_(const char* uplo, const int* n, const int* nrhs,
const double* ap, const double* afp,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dppsv_(const char* uplo, const int* n,
const int* nrhs, const double* ap,
double* b, const int* ldb, int* info);
extern void
dppsvx_(const int* fact, const char* uplo,
const int* n, const int* nrhs, double* ap,
double* afp, char* equed, double* s,
double* b, const int* ldb,
double* x, const int* ldx,
double* rcond, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dpptrf_(const char* uplo, const int* n, double* ap, int* info);
extern void
dpptri_(const char* uplo, const int* n, double* ap, int* info);
extern void
dpptrs_(const char* uplo, const int* n,
const int* nrhs, const double* ap,
double* b, const int* ldb, int* info);
extern void
dptcon_(const int* n,
const double* d, const double* e,
const double* anorm, double* rcond,
double* work, int* info);
extern void
dpteqr_(const char* compz, const int* n, double* d,
double* e, double* z, const int* ldz,
double* work, int* info);
extern void
dptrfs_(const int* n, const int* nrhs,
const double* d, const double* e,
const double* df, const double* ef,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* info);
extern void
dptsv_(const int* n, const int* nrhs, double* d,
double* e, double* b, const int* ldb, int* info);
extern void
dptsvx_(const int* fact, const int* n,
const int* nrhs,
const double* d, const double* e,
double* df, double* ef,
const double* b, const int* ldb,
double* x, const int* ldx, double* rcond,
double* ferr, double* berr,
double* work, int* info);
extern void
dpttrf_(const int* n, double* d, double* e, int* info);
extern void
dpttrs_(const int* n, const int* nrhs,
const double* d, const double* e,
double* b, const int* ldb, int* info);
extern void
drscl_(const int* n, const double* da,
double* x, const int* incx);
extern void
dsbev_(const char* jobz, const char* uplo,
const int* n, const int* kd,
double* ab, const int* ldab,
double* w, double* z, const int* ldz,
double* work, int* info);
extern void
dsbevd_(const char* jobz, const char* uplo,
const int* n, const int* kd,
double* ab, const int* ldab,
double* w, double* z, const int* ldz,
double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dsbevx_(const char* jobz, const char* range,
const char* uplo, const int* n, const int* kd,
double* ab, const int* ldab,
double* q, const int* ldq,
const double* vl, const double* vu,
const int* il, const int* iu,
const double* abstol,
int* m, double* w,
double* z, const int* ldz,
double* work, int* iwork,
int* ifail, int* info);
extern void
dsbgst_(const char* vect, const char* uplo,
const int* n, const int* ka, const int* kb,
double* ab, const int* ldab,
double* bb, const int* ldbb,
double* x, const int* ldx,
double* work, int* info);
extern void
dsbgv_(const char* jobz, const char* uplo,
const int* n, const int* ka, const int* kb,
double* ab, const int* ldab,
double* bb, const int* ldbb,
double* w, double* z, const int* ldz,
double* work, int* info);
extern void
dsbtrd_(const char* vect, const char* uplo,
const int* n, const int* kd,
double* ab, const int* ldab,
double* d, double* e,
double* q, const int* ldq,
double* work, int* info);
extern void
dspcon_(const char* uplo, const int* n,
const double* ap, const int* ipiv,
const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dspev_(const char* jobz, const char* uplo, const int* n,
double* ap, double* w, double* z, const int* ldz,
double* work, int* info);
extern void
dspevd_(const char* jobz, const char* uplo,
const int* n, double* ap, double* w,
double* z, const int* ldz,
double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dspevx_(const char* jobz, const char* range,
const char* uplo, const int* n, double* ap,
const double* vl, const double* vu,
const int* il, const int* iu,
const double* abstol,
int* m, double* w,
double* z, const int* ldz,
double* work, int* iwork,
int* ifail, int* info);
extern void
dspgst_(const int* itype, const char* uplo,
const int* n, double* ap, double* bp, int* info);
extern void
dspgv_(const int* itype, const char* jobz,
const char* uplo, const int* n,
double* ap, double* bp, double* w,
double* z, const int* ldz,
double* work, int* info);
extern void
dsprfs_(const char* uplo, const int* n,
const int* nrhs, const double* ap,
const double* afp, const int* ipiv,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dspsv_(const char* uplo, const int* n,
const int* nrhs, double* ap, int* ipiv,
double* b, const int* ldb, int* info);
extern void
dspsvx_(const int* fact, const char* uplo,
const int* n, const int* nrhs,
const double* ap, double* afp, int* ipiv,
const double* b, const int* ldb,
double* x, const int* ldx,
double* rcond, double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dsptrd_(const char* uplo, const int* n,
double* ap, double* d, double* e,
double* tau, int* info);
extern void
dsptrf_(const char* uplo, const int* n,
double* ap, int* ipiv, int* info);
extern void
dsptri_(const char* uplo, const int* n,
double* ap, const int* ipiv,
double* work, int* info);
extern void
dsptrs_(const char* uplo, const int* n,
const int* nrhs, const double* ap,
const int* ipiv, double* b, const int* ldb, int* info);
extern void
dstebz_(const char* range, const char* order, const int* n,
const double* vl, const double* vu,
const int* il, const int* iu,
const double *abstol,
const double* d, const double* e,
int* m, int* nsplit, double* w,
int* iblock, int* isplit,
double* work, int* iwork,
int* info);
extern void
dstedc_(const char* compz, const int* n,
double* d, double* e,
double* z, const int* ldz,
double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dstein_(const int* n, const double* d, const double* e,
const int* m, const double* w,
const int* iblock, const int* isplit,
double* z, const int* ldz,
double* work, int* iwork,
int* ifail, int* info);
extern void
dsteqr_(const char* compz, const int* n, double* d, double* e,
double* z, const int* ldz, double* work, int* info);
extern void
dsterf_(const int* n, double* d, double* e, int* info);
extern void
dstev_(const char* jobz, const int* n,
double* d, double* e,
double* z, const int* ldz,
double* work, int* info);
extern void
dstevd_(const char* jobz, const int* n,
double* d, double* e,
double* z, const int* ldz,
double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dstevx_(const char* jobz, const char* range,
const int* n, double* d, double* e,
const double* vl, const double* vu,
const int* il, const int* iu,
const double* abstol,
int* m, double* w,
double* z, const int* ldz,
double* work, int* iwork,
int* ifail, int* info);
extern void
dsycon_(const char* uplo, const int* n,
const double* a, const int* lda,
const int* ipiv,
const double* anorm, double* rcond,
double* work, int* iwork, int* info);
extern void
dsyev_(const char* jobz, const char* uplo,
const int* n, double* a, const int* lda,
double* w, double* work, const int* lwork, int* info);
extern void
dsyevd_(const char* jobz, const char* uplo,
const int* n, double* a, const int* lda,
double* w, double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dsyevx_(const char* jobz, const char* range,
const char* uplo, const int* n,
double* a, const int* lda,
const double* vl, const double* vu,
const int* il, const int* iu,
const double* abstol,
int* m, double* w,
double* z, const int* ldz,
double* work, const int* lwork, int* iwork,
int* ifail, int* info);
extern void
dsyevr_(const char *jobz, const char *range, const char *uplo,
const int *n, double *a, const int *lda,
const double *vl, const double *vu,
const int *il, const int *iu,
const double *abstol, int *m, double *w,
double *z, const int *ldz, int *isuppz,
double *work, const int *lwork,
int *iwork, const int *liwork,
int *info);
extern void
dsygs2_(const int* itype, const char* uplo,
const int* n, double* a, const int* lda,
const double* b, const int* ldb, int* info);
extern void
dsygst_(const int* itype, const char* uplo,
const int* n, double* a, const int* lda,
const double* b, const int* ldb, int* info);
extern void
dsygv_(const int* itype, const char* jobz,
const char* uplo, const int* n,
double* a, const int* lda,
double* b, const int* ldb,
double* w, double* work, const int* lwork,
int* info);
extern void
dsyrfs_(const char* uplo, const int* n,
const int* nrhs,
const double* a, const int* lda,
const double* af, const int* ldaf,
const int* ipiv,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dsysv_(const char* uplo, const int* n,
const int* nrhs,
double* a, const int* lda, int* ipiv,
double* b, const int* ldb,
double* work, const int* lwork, int* info);
extern void
dsysvx_(const int* fact, const char* uplo,
const int* n, const int* nrhs,
const double* a, const int* lda,
double* af, const int* ldaf, int* ipiv,
const double* b, const int* ldb,
double* x, const int* ldx, double* rcond,
double* ferr, double* berr,
double* work, const int* lwork,
int* iwork, int* info);
extern void
dsytd2_(const char* uplo, const int* n,
double* a, const int* lda,
double* d, double* e, double* tau,
int* info);
extern void
dsytf2_(const char* uplo, const int* n,
double* a, const int* lda,
int* ipiv, int* info);
extern void
dsytrd_(const char* uplo, const int* n,
double* a, const int* lda,
double* d, double* e, double* tau,
double* work, const int* lwork, int* info);
extern void
dsytrf_(const char* uplo, const int* n,
double* a, const int* lda, int* ipiv,
double* work, const int* lwork, int* info);
extern void
dsytri_(const char* uplo, const int* n,
double* a, const int* lda, const int* ipiv,
double* work, int* info);
extern void
dsytrs_(const char* uplo, const int* n,
const int* nrhs,
const double* a, const int* lda,
const int* ipiv,
double* b, const int* ldb, int* info);
extern void
dtbcon_(const char* norm, const char* uplo,
const char* diag, const int* n, const int* kd,
const double* ab, const int* ldab,
double* rcond, double* work,
int* iwork, int* info);
extern void
dtbrfs_(const char* uplo, const char* trans,
const char* diag, const int* n, const int* kd,
const int* nrhs,
const double* ab, const int* ldab,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dtbtrs_(const char* uplo, const char* trans,
const char* diag, const int* n,
const int* kd, const int* nrhs,
const double* ab, const int* ldab,
double* b, const int* ldb, int* info);
extern void
dtgevc_(const char* side, const char* howmny,
const int* select, const int* n,
const double* a, const int* lda,
const double* b, const int* ldb,
double* vl, const int* ldvl,
double* vr, const int* ldvr,
const int* mm, int* m, double* work, int* info);
extern void
dtgsja_(const char* jobu, const char* jobv, const char* jobq,
const int* m, const int* p, const int* n,
const int* k, const int* l,
double* a, const int* lda,
double* b, const int* ldb,
const double* tola, const double* tolb,
double* alpha, double* beta,
double* u, const int* ldu,
double* v, const int* ldv,
double* q, const int* ldq,
double* work, int* ncycle, int* info);
extern void
dtpcon_(const char* norm, const char* uplo,
const char* diag, const int* n,
const double* ap, double* rcond,
double* work, int* iwork, int* info);
extern void
dtprfs_(const char* uplo, const char* trans,
const char* diag, const int* n,
const int* nrhs, const double* ap,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dtptri_(const char* uplo, const char* diag,
const int* n, double* ap, int* info);
extern void
dtptrs_(const char* uplo, const char* trans,
const char* diag, const int* n,
const int* nrhs, const double* ap,
double* b, const int* ldb, int* info);
extern void
dtrcon_(const char* norm, const char* uplo,
const char* diag, const int* n,
const double* a, const int* lda,
double* rcond, double* work,
int* iwork, int* info);
extern void
dtrevc_(const char* side, const char* howmny,
const int* select, const int* n,
const double* t, const int* ldt,
double* vl, const int* ldvl,
double* vr, const int* ldvr,
const int* mm, int* m, double* work, int* info);
extern void
dtrexc_(const char* compq, const int* n,
double* t, const int* ldt,
double* q, const int* ldq,
int* ifst, int* ILST,
double* work, int* info);
extern void
dtrrfs_(const char* uplo, const char* trans,
const char* diag, const int* n, const int* nrhs,
const double* a, const int* lda,
const double* b, const int* ldb,
double* x, const int* ldx,
double* ferr, double* berr,
double* work, int* iwork, int* info);
extern void
dtrsen_(const char* job, const char* compq,
const int* select, const int* n,
double* t, const int* ldt,
double* q, const int* ldq,
double* wr, double* wi,
int* m, double* s, double* sep,
double* work, const int* lwork,
int* iwork, const int* liwork, int* info);
extern void
dtrsna_(const char* job, const char* howmny,
const int* select, const int* n,
const double* t, const int* ldt,
const double* vl, const int* ldvl,
const double* vr, const int* ldvr,
double* s, double* sep, const int* mm,
int* m, double* work, const int* lwork,
int* iwork, int* info);
extern void
dtrsyl_(const char* trana, const char* tranb,
const int* isgn, const int* m, const int* n,
const double* a, const int* lda,
const double* b, const int* ldb,
double* c, const int* ldc,
double* scale, int* info);
extern void
dtrti2_(const char* uplo, const char* diag,
const int* n, double* a, const int* lda,
int* info);
extern void
dtrtri_(const char* uplo, const char* diag,
const int* n, double* a, const int* lda,
int* info);
extern void
dtrtrs_(const char* uplo, const char* trans,
const char* diag, const int* n, const int* nrhs,
const double* a, const int* lda,
double* b, const int* ldb, int* info);
extern void
dtzrqf_(const int* m, const int* n,
double* a, const int* lda,
double* tau, int* info);
extern void
dhgeqz_(const char* job, const char* compq, const char* compz,
const int* n, const int* ilo, const int* ihi,
double* a, const int* lda,
double* b, const int* ldb,
double* alphar, double* alphai, const double* beta,
double* q, const int* ldq,
double* z, const int* ldz,
double* work, const int* lwork, int* info);
extern void
dhsein_(const char* side, const char* eigsrc,
const char* initv, int* select,
const int* n, double* h, const int* ldh,
double* wr, double* wi,
double* vl, const int* ldvl,
double* vr, const int* ldvr,
const int* mm, int* m, double* work,
int* ifaill, int* ifailr, int* info);
extern void
dhseqr_(const char* job, const char* compz, const int* n,
const int* ilo, const int* ihi,
double* h, const int* ldh,
double* wr, double* wi,
double* z, const int* ldz,
double* work, const int* lwork, int* info);
extern void
dlabad_(double* small, double* large);
extern void
dlabrd_(const int* m, const int* n, const int* nb,
double* a, const int* lda, double* d, double* e,
double* tauq, double* taup,
double* x, const int* ldx, double* y, const int* ldy);
extern void
dlacon_(const int* n, double* v, double* x,
int* isgn, double* est, int* kase);
extern void
dlacpy_(const char* uplo, const int* m, const int* n,
const double* a, const int* lda,
double* b, const int* ldb);
extern void
dladiv_(const double* a, const double* b,
const double* c, const double* d,
double* p, double* q);
extern void
dlae2_(const double* a, const double* b, const double* c,
double* rt1, double* rt2);
extern void
dlaebz_(const int* ijob, const int* nitmax, const int* n,
const int* mmax, const int* minp, const int* nbmin,
const double* abstol, const double* reltol,
const double* pivmin, double* d, double* e,
double* e2, int* nval, double* ab, double* c,
int* mout, int* nab, double* work, int* iwork,
int* info);
extern void
dlaed0_(const int* icompq, const int* qsiz, const int* n,
double* d, double* e, double* q, const int* ldq,
double* qstore, const int* ldqs,
double* work, int* iwork, int* info);
extern void
dlaed1_(const int* n, double* d, double* q, const int* ldq,
int* indxq, const double* rho, const int* cutpnt,
double* work, int* iwork, int* info);
extern void
dlaed2_(const int* k, const int* n, double* d,
double* q, const int* ldq, int* indxq,
double* rho, double* z,
double* dlamda, double* w, double* q2,
int* indx, int* indxc, int* indxp,
int* coltyp, int* info);
extern void
dlaed3_(const int* k, const int* n, const int* n1,
double* d, double* q, const int* ldq,
const double* rho, double* dlamda, double* q2,
int* indx, int* ctot, double* w,
double* s, int* info);
extern void
dlaed4_(const int* n, const int* i, const double* d,
const double* z, const double* delta,
const double* rho, double* dlam, int* info);
extern void
dlaed5_(const int* i, const double* d, const double* z,
double* delta, const double* rho, double* dlam);
extern void
dlaed6_(const int* kniter, const int* orgati,
const double* rho, const double* d,
const double* z, const double* finit,
double* tau, int* info);
extern void
dlaed7_(const int* icompq, const int* n,
const int* qsiz, const int* tlvls,
const int* curlvl, const int* curpbm,
double* d, double* q, const int* ldq,
int* indxq, const double* rho, const int* cutpnt,
double* qstore, double* qptr, const int* prmptr,
const int* perm, const int* givptr,
const int* givcol, const double* givnum,
double* work, int* iwork, int* info);
extern void
dlaed8_(const int* icompq, const int* k,
const int* n, const int* qsiz,
double* d, double* q, const int* ldq,
const int* indxq, double* rho,
const int* cutpnt, const double* z,
double* dlamda, double* q2, const int* ldq2,
double* w, int* perm, int* givptr,
int* givcol, double* givnum, int* indxp,
int* indx, int* info);
extern void
dlaed9_(const int* k, const int* kstart, const int* kstop,
const int* n, double* d, double* q, const int* ldq,
const double* rho, const double* dlamda,
const double* w, double* s, const int* lds, int* info);
extern void
dlaeda_(const int* n, const int* tlvls, const int* curlvl,
const int* curpbm, const int* prmptr, const int* perm,
const int* givptr, const int* givcol,
const double* givnum, const double* q,
const int* qptr, double* z, double* ztemp, int* info);
extern void
dlaein_(const int* rightv, const int* noinit, const int* n,
const double* h, const int* ldh,
const double* wr, const double* wi,
double* vr, double* vi,
double* b, const int* ldb, double* work,
const double* eps3, const double* smlnum,
const double* bignum, int* info);
extern void
dlaev2_(const double* a, const double* b, const double* c,
double* rt1, double* rt2, double* cs1, double *sn1);
extern void
dlaexc_(const int* wantq, const int* n, double* t, const int* ldt,
double* q, const int* ldq, const int* j1,
const int* n1, const int* n2, double* work, int* info);
extern void
dlag2_(const double* a, const int* lda, const double* b,
const int* ldb, const double* safmin,
double* scale1, double* scale2,
double* wr1, double* wr2, double* wi);
extern void
dlags2_(const int* upper,
const double* a1, const double* a2, const double* a3,
const double* b1, const double* b2, const double* b3,
double* csu, double* snu,
double* csv, double* snv, double *csq, double *snq);
extern void
dlagtf_(const int* n, double* a, const double* lambda,
double* b, double* c, const double *tol,
double* d, int* in, int* info);
extern void
dlagtm_(const char* trans, const int* n, const int* nrhs,
const double* alpha, const double* dl,
const double* d, const double* du,
const double* x, const int* ldx, const double* beta,
double* b, const int* ldb);
extern void
dlagts_(const int* job, const int* n,
const double* a, const double* b,
const double* c, const double* d,
const int* in, double* y, double* tol, int* info);
extern void
dlahqr_(const int* wantt, const int* wantz, const int* n,
const int* ilo, const int* ihi,
double* h, const int* ldh, double* wr, double* wi,
const int* iloz, const int* ihiz,
double* z, const int* ldz, int* info);
extern void
dlahrd_(const int* n, const int* k, const int* nb,
double* a, const int* lda,
double* tau, double* t, const int* ldt,
double* y, const int* ldy);
extern void
dlaic1_(const int* job, const int* j, const double* x,
const double* sest, const double* w,
const double* gamma, double* sestpr,
double* s, double* c);
extern void
dlaln2_(const int* ltrans, const int* na, const int* nw,
const double* smin, const double* ca,
const double* a, const int* lda,
const double* d1, const double* d2,
const double* b, const int* ldb,
const double* wr, const double* wi,
double* x, const int* ldx, double* scale,
double* xnorm, int* info);
extern double
dlamch_(const char* cmach);
extern void
dlamrg_(const int* n1, const int* n2, const double* a,
const int* dtrd1, const int* dtrd2, int* index);
extern double
dlangb_(const char* norm, const int* n,
const int* kl, const int* ku, const double* ab,
const int* ldab, double* work);
extern double
dlange_(const char* norm, const int* m, const int* n,
const double* a, const int* lda, double* work);
extern double
dlangt_(const char* norm, const int* n,
const double* dl, const double* d,
const double* du);
extern double
dlanhs_(const char* norm, const int* n,
const double* a, const int* lda, double* work);
extern double
dlansb_(const char* norm, const char* uplo,
const int* n, const int* k,
const double* ab, const int* ldab, double* work);
extern double
dlansp_(const char* norm, const char* uplo,
const int* n, const double* ap, double* work);
extern double
dlanst_(const char* norm, const int* n,
const double* d, const double* e);
extern double
dlansy_(const char* norm, const char* uplo, const int* n,
const double* a, const int* lda, double* work);
extern double
dlantb_(const char* norm, const char* uplo,
const char* diag, const int* n, const int* k,
const double* ab, const int* ldab, double* work);
extern double
dlantp_(const char* norm, const char* uplo, const char* diag,
const int* n, const double* ap, double* work);
extern double
dlantr_(const char* norm, const char* uplo,
const char* diag, const int* m, const int* n,
const double* a, const int* lda, double* work);
extern void
dlanv2_(double* a, double* b, double* c, double* d,
double* rt1r, double* rt1i, double* rt2r, double* rt2i,
double* cs, double *sn);
extern void
dlapll_(const int* n, double* x, const int* incx,
double* y, const int* incy, double* ssmin);
extern void
dlapmt_(const int* forwrd, const int* m, const int* n,
double* x, const int* ldx, const int* k);
extern double
dlapy2_(const double* x, const double* y);
extern double
dlapy3_(const double* x, const double* y, const double* z);
extern void
dlaqgb_(const int* m, const int* n,
const int* kl, const int* ku,
double* ab, const int* ldab,
double* r, double* c,
double* rowcnd, double* colcnd,
const double* amax, char* equed);
extern void
dlaqge_(const int* m, const int* n,
double* a, const int* lda,
double* r, double* c,
double* rowcnd, double* colcnd,
const double* amax, char* equed);
extern void
dlaqsb_(const char* uplo, const int* n, const int* kd,
double* ab, const int* ldab, const double* s,
const double* scond, const double* amax, char* equed);
extern void
dlaqsp_(const char* uplo, const int* n,
double* ap, const double* s, const double* scond,
const double* amax, char* equed);
extern void
dlaqsy_(const char* uplo, const int* n,
double* a, const int* lda,
const double* s, const double* scond,
const double* amax, char* equed);
extern void
dlaqtr_(const int* ltran, const int* lreal, const int* n,
const double* t, const int* ldt,
const double* b, const double* w,
double* scale, double* x, double* work, int* info);
extern void
dlar2v_(const int* n, double* x, double* y,
double* z, const int* incx,
const double* c, const double* s,
const int* incc);
extern void
dlarf_(const char* side, const int* m, const int* n,
const double* v, const int* incv, const double* tau,
double* c, const int* ldc, double* work);
extern void
dlarfb_(const char* side, const char* trans,
const char* direct, const char* storev,
const int* m, const int* n, const int* k,
const double* v, const int* ldv,
const double* t, const int* ldt,
double* c, const int* ldc,
double* work, const int* lwork);
extern void
dlarfg_(const int* n, const double* alpha,
double* x, const int* incx, double* tau);
extern void
dlarft_(const char* direct, const char* storev,
const int* n, const int* k, double* v, const int* ldv,
const double* tau, double* t, const int* ldt);
extern void
dlarfx_(const char* side, const int* m, const int* n,
const double* v, const double* tau,
double* c, const int* ldc, double* work);
extern void
dlargv_(const int* n, double* x, const int* incx,
double* y, const int* incy, double* c, const int* incc);
extern void
dlarnv_(const int* idist, int* iseed, const int* n, double* x);
extern void
dlartg_(const double* f, const double* g, double* cs,
double* sn, double *r);
extern void
dlartv_(const int* n, double* x, const int* incx,
double* y, const int* incy,
const double* c, const double* s,
const int* incc);
extern void
dlaruv_(int* iseed, const int* n, double* x);
extern void
dlas2_(const double* f, const double* g, const double* h,
double* ssmin, double* ssmax);
extern void
dlascl_(const char* type,
const int* kl,const int* ku,
double* cfrom, double* cto,
const int* m, const int* n,
double* a, const int* lda, int* info);
extern void
dlaset_(const char* uplo, const int* m, const int* n,
const double* alpha, const double* beta,
double* a, const int* lda);
extern void
dlasq1_(const int* n, double* d, double* e,
double* work, int* info);
extern void
dlasq2_(const int* m, double* q, double* e,
double* qq, double* ee, const double* eps,
const double* tol2, const double* small2,
double* sup, int* kend, int* info);
extern void
dlasq3_(int* n, double* q, double* e, double* qq,
double* ee, double* sup, double *sigma,
int* kend, int* off, int* iphase,
const int* iconv, const double* eps,
const double* tol2, const double* small2);
extern void
dlasq4_(const int* n, const double* q, const double* e,
double* tau, double* sup);
extern void
dlasr_(const char* side, const char* pivot,
const char* direct, const int* m, const int* n,
const double* c, const double* s,
double* a, const int* lda);
extern void
dlasrt_(const char* id, const int* n, double* d, int* info);
extern void
dlassq_(const int* n, const double* x, const int* incx,
double* scale, double* sumsq);
extern void
dlasv2_(const double* f, const double* g, const double* h,
double* ssmin, double* ssmax, double* snr, double* csr,
double* snl, double* csl);
extern void
dlaswp_(const int* n, double* a, const int* lda,
const int* k1, const int* k2,
const int* ipiv, const int* incx);
extern void
dlasy2_(const int* ltranl, const int* ltranr,
const int* isgn, const int* n1, const int* n2,
const double* tl, const int* ldtl,
const double* tr, const int* ldtr,
const double* b, const int* ldb,
double* scale, double* x, const int* ldx,
double* xnorm, int* info);
extern void
dlasyf_(const char* uplo, const int* n,
const int* nb, const int* kb,
double* a, const int* lda, int* ipiv,
double* w, const int* ldw, int* info);
extern void
dlatbs_(const char* uplo, const char* trans,
const char* diag, const char* normin,
const int* n, const int* kd,
const double* ab, const int* ldab,
double* x, double* scale, double* cnorm, int* info);
extern void
dlatps_(const char* uplo, const char* trans,
const char* diag, const char* normin,
const int* n, const double* ap,
double* x, double* scale, double* cnorm, int* info);
extern void
dlatrd_(const char* uplo, const int* n, const int* nb,
double* a, const int* lda, double* e, double* tau,
double* w, const int* ldw);
extern void
dlatrs_(const char* uplo, const char* trans,
const char* diag, const char* normin,
const int* n, const double* a, const int* lda,
double* x, double* scale, double* cnorm, int* info);
extern void
dlatzm_(const char* side, const int* m, const int* n,
const double* v, const int* incv,
const double* tau, double* c1, double* c2,
const int* ldc, double* work);
extern void
dlauu2_(const char* uplo, const int* n,
double* a, const int* lda, int* info);
extern void
dlauum_(const char* uplo, const int* n,
double* a, const int* lda, int* info);
extern int
izmax1_(const int *n, Rcomplex *cx, const int *incx);
extern void
zgecon_(const char *norm, const int *n,
const Rcomplex *a, const int *lda,
const double *anorm, double *rcond,
Rcomplex *work, double *rwork, int *info);
extern void
zgesv_(const int *n, const int *nrhs, Rcomplex *a,
const int *lda, int *ipiv, Rcomplex *b,
const int *ldb, int *info);
extern void
zgeqp3_(const int *m, const int *n,
Rcomplex *a, const int *lda,
int *jpvt, Rcomplex *tau,
Rcomplex *work, const int *lwork,
double *rwork, int *info);
extern void
zunmqr_(const char *side, const char *trans,
const int *m, const int *n, const int *k,
Rcomplex *a, const int *lda,
Rcomplex *tau,
Rcomplex *c, const int *ldc,
Rcomplex *work, const int *lwork, int *info);
extern void
ztrtrs_(const char *uplo, const char *trans, const char *diag,
const int *n, const int *nrhs,
Rcomplex *a, const int *lda,
Rcomplex *b, const int *ldb, int *info);
extern void
zgesvd_(const char *jobu, const char *jobvt,
const int *m, const int *n,
Rcomplex *a, const int *lda, double *s,
Rcomplex *u, const int *ldu,
Rcomplex *vt, const int *ldvt,
Rcomplex *work, const int *lwork, double *rwork,
int *info);
extern void
zheev_(const char *jobz, const char *uplo,
const int *n, Rcomplex *a, const int *lda,
double *w, Rcomplex *work, const int *lwork,
double *rwork, int *info);
extern void
zgeev_(const char *jobvl, const char *jobvr,
const int *n, Rcomplex *a, const int *lda,
Rcomplex *wr, Rcomplex *vl, const int *ldvl,
Rcomplex *vr, const int *ldvr,
Rcomplex *work, const int *lwork,
double *rwork, int *info);
extern double
dzsum1_(const int *n, Rcomplex *cx, const int *incx);
extern void
zlacn2_(const int *n, Rcomplex *v, Rcomplex *x,
double *est, int *kase, int *isave);
extern double
zlantr_(const char *norm, const char *uplo, const char *diag,
const int *m, const int *n, Rcomplex *a,
const int *lda, double *work);
extern void
dbdsdc_(char *uplo, char *compq, int *n,
double *d, double *e, double *u, int *ldu,
double *vt, int *ldvt, double *q, int *iq,
double *work, int *iwork, int *info);
extern void
dgegs_(char *jobvsl, char *jobvsr, int *n,
double *a, int *lda, double *b, int *ldb,
double *alphar, double *alphai, double *beta,
double *vsl, int *ldvsl, double *vsr, int *ldvsr,
double *work, int *lwork, int *info);
extern void
dgelsd_(int *m, int *n, int *nrhs,
double *a, int *lda, double *b, int *ldb,
double *s, double *rcond, int *rank,
double *work, int *lwork, int *iwork, int *info);
extern void
dgelsx_(int *m, int *n, int *nrhs,
double *a, int *lda, double *b, int *ldb,
int *jpvt, double *rcond, int *rank,
double *work, int *info);
extern void
dgesc2_(int *n, double *a, int *lda,
double *rhs, int *ipiv, int *jpiv, double *scale);
extern void
dgesdd_(const char *jobz,
const int *m, const int *n,
double *a, const int *lda, double *s,
double *u, const int *ldu,
double *vt, const int *ldvt,
double *work, const int *lwork, int *iwork, int *info);
extern void
dgetc2_(int *n, double *a, int *lda,
int *ipiv, int *jpiv, int *info);
typedef int (*L_fp)();
extern void
dggesx_(char *jobvsl, char *jobvsr, char *sort,
L_fp delctg, char *sense, int *n,
double *a, int *lda, double *b, int *ldb,
int *sdim, double *alphar, double *alphai, double *beta,
double *vsl, int *ldvsl, double *vsr, int *ldvsr,
double *rconde, double *rcondv,
double *work, int *lwork, int *iwork, int *liwork,
int *bwork, int *info);
extern void
dggev_(char *jobvl, char *jobvr, int *n,
double *a, int *lda, double *b, int *ldb,
double *alphar, double *alphai, double *beta,
double *vl, int *ldvl, double *vr, int *ldvr,
double *work, int *lwork, int *info);
extern void
dggevx_(char *balanc, char *jobvl, char *jobvr, char *sense,
int *n, double *a, int *lda, double *b, int *ldb,
double *alphar, double *alphai, double *beta,
double *vl, int *ldvl, double *vr, int *ldvr,
int *ilo, int *ihi, double *lscale, double *rscale,
double *abnrm, double *bbnrm, double *rconde, double *rcondv,
double *work, int *lwork, int *iwork, int *bwork,
int *info);
extern void
dggsvp_(char *jobu, char *jobv, char *jobq, int *m,
int *p, int *n, double *a, int *lda,
double *b, int *ldb, double *tola, double *tolb,
int *k, int *l, double *u, int *ldu,
double *v, int *ldv, double *q, int *ldq,
int *iwork, double *tau, double *work, int *info);
extern void
dgtts2_(int *itrans, int *n, int *nrhs,
double *dl, double *d, double *du, double *du2,
int *ipiv, double *b, int *ldb);
extern void
dlagv2_(double *a, int *lda, double *b, int *ldb,
double *alphar, double *alphai, double *beta,
double *csl, double *snl, double *csr, double *snr);
extern void
dlals0_(int *icompq, int *nl, int *nr, int *sqre,
int *nrhs, double *b, int *ldb, double *bx, int *ldbx,
int *perm, int *givptr, int *givcol, int *ldgcol,
double *givnum, int *ldgnum, double *poles,
double *difl, double *difr, double *z, int *k,
double *c, double *s, double *work, int *info);
extern void
dlalsa_(int *icompq, int *smlsiz, int *n, int *nrhs,
double *b, int *ldb, double *bx, int *ldbx,
double *u, int *ldu, double *vt, int *k,
double *difl, double *difr, double *z, double *poles,
int *givptr, int *givcol, int *ldgcol, int *perm,
double *givnum, double *c, double *s,
double *work, int *iwork, int *info);
extern void
dlalsd_(char *uplo, int *smlsiz, int *n, int *nrhs,
double *d, double *e, double *b, int *ldb,
double *rcond, int *rank, double *work, int *iwork,
int *info);
extern void
dlamc1_(int *beta, int *t, int *rnd, int *ieee1);
extern void
dlamc2_(int *beta, int *t, int *rnd,
double *eps, int *emin, double *rmin, int *emax,
double *rmax);
extern double
dlamc3_(double *a, double *b);
extern void
dlamc4_(int *emin, double *start, int *base);
extern void
dlamc5_(int *beta, int *p, int *emin,
int *ieee, int *emax, double *rmax);
extern void
dlaqp2_(int *m, int *n, int *offset,
double *a, int *lda, int *jpvt, double *tau,
double *vn1, double *vn2, double *work);
extern void
dlaqps_(int *m, int *n, int *offset, int *nb,
int *kb, double *a, int *lda, int *jpvt,
double *tau, double *vn1, double *vn2,
double *auxv, double *f, int *ldf);
extern void
dlar1v_(int *n, int *b1, int *bn, double *sigma,
double *d, double *l, double *ld, double *lld,
double *gersch, double *z, double *ztz, double *mingma,
int *r, int *isuppz, double *work);
extern void
dlarrb_(int *n, double *d, double *l,
double *ld, double *lld, int *ifirst, int *ilast,
double *sigma, double *reltol, double *w,
double *wgap, double *werr, double *work, int *iwork,
int *info);
extern void
dlarre_(int *n, double *d, double *e,
double *tol, int *nsplit, int *isplit, int *m,
double *w, double *woff, double *gersch, double *work,
int *info);
extern void
dlarrf_(int *n, double *d, double *l,
double *ld, double *lld, int *ifirst, int *ilast,
double *w, double *dplus, double *lplus, double *work,
int *iwork, int *info);
extern void
dlarrv_(int *n, double *d, double *l,
int *isplit, int *m, double *w, int *iblock,
double *gersch, double *tol, double *z, int *ldz,
int *isuppz, double *work, int *iwork, int *info);
extern void
dlarz_(char *side, int *m, int *n, int *l,
double *v, int *incv, double *tau, double *c,
int *ldc, double *work);
extern void
dlarzb_(char *side, char *trans, char *direct, char *storev,
int *m, int *n, int *k, int *l,
double *v, int *ldv, double *t, int *ldt,
double *c, int *ldc, double *work, int *ldwork);
extern void
dlarzt_(char *direct, char *storev, int *n, int *k,
double *v, int *ldv, double *tau, double *t, int *ldt);
extern void
dlasd0_(int *n, int *sqre, double *d, double *e,
double *u, int *ldu, double *vt, int *ldvt,
int *smlsiz, int *iwork, double *work, int *info);
extern void
dlasd1_(int *nl, int *nr, int *sqre,
double *d, double *alpha, double *beta,
double *u, int *ldu, double *vt, int *ldvt,
int *idxq, int *iwork, double *work, int *info);
extern void
dlasd2_(int *nl, int *nr, int *sqre, int *k,
double *d, double *z, double *alpha, double *beta,
double *u, int *ldu, double *vt, int *ldvt,
double *dsigma, double *u2, int *ldu2,
double *vt2, int *ldvt2, int *idxp, int *idx,
int *idxc, int *idxq, int *coltyp, int *info);
extern void
dlasd3_(int *nl, int *nr, int *sqre, int *k,
double *d, double *q, int *ldq, double *dsigma,
double *u, int *ldu, double *u2, int *ldu2,
double *vt, int *ldvt, double *vt2, int *ldvt2,
int *idxc, int *ctot, double *z, int *info);
extern void
dlasd4_(int *n, int *i, double *d, double *z,
double *delta, double *rho, double *sigma,
double *work, int *info);
extern void
dlasd5_(int *i, double *d, double *z,
double *delta, double *rho, double *dsigma,
double *work);
extern void
dlasd6_(int *icompq, int *nl, int *nr, int *sqre,
double *d, double *vf, double *vl,
double *alpha, double *beta, int *idxq, int *perm,
int *givptr, int *givcol, int *ldgcol,
double *givnum, int *ldgnum, double *poles,
double *difl, double *difr, double *z, int *k,
double *c, double *s, double *work, int *iwork, int *info);
extern void
dlasd7_(int *icompq, int *nl, int *nr, int *sqre,
int *k, double *d, double *z, double *zw,
double *vf, double *vfw, double *vl, double *vlw,
double *alpha, double *beta, double *dsigma,
int *idx, int *idxp, int *idxq, int *perm,
int *givptr, int *givcol, int *ldgcol,
double *givnum, int *ldgnum, double *c, double *s,
int *info);
extern void
dlasd8_(int *icompq, int *k, double *d, double *z,
double *vf, double *vl, double *difl, double *difr,
int *lddifr, double *dsigma, double *work, int *info);
extern void
dlasd9_(int *icompq, int *ldu, int *k,
double *d, double *z, double *vf, double *vl,
double *difl, double *difr, double *dsigma,
double *work, int *info);
extern void
dlasda_(int *icompq, int *smlsiz, int *n, int *sqre,
double *d, double *e, double *u, int *ldu,
double *vt, int *k, double *difl, double *difr,
double *z, double *poles, int *givptr, int *givcol,
int *ldgcol, int *perm, double *givnum,
double *c, double *s, double *work, int *iwork, int *info);
extern void
dlasdq_(char *uplo, int *sqre, int *n, int *ncvt,
int *nru, int *ncc, double *d, double *e,
double *vt, int *ldvt, double *u, int *ldu,
double *c, int *ldc, double *work, int *info);
extern void
dlasdt_(int *n, int *lvl, int *nd,
int *inode, int *ndiml, int *ndimr, int *msub);
extern void
dlasq5_(int *i0, int *n0, double *z,
int *pp, double *tau, double *dmin, double *dmin1,
double *dmin2, double *dn, double *dnm1, double *dnm2,
int *ieee);
extern void
dlasq6_(int *i0, int *n0, double *z,
int *pp, double *dmin, double *dmin1, double *dmin2,
double *dn, double *dnm1, double *dnm2);
extern void
dlatdf_(int *ijob, int *n, double *z,
int *ldz, double *rhs, double *rdsum, double *rdscal,
int *ipiv, int *jpiv);
extern void
dlatrz_(int *m, int *n, int *l,
double *a, int *lda, double *tau, double *work);
extern void
dormr3_(char *side, char *trans, int *m, int *n,
int *k, int *l, double *a, int *lda, double *tau,
double *c, int *ldc, double *work, int *info);
extern void
dormrz_(char *side, char *trans, int *m, int *n,
int *k, int *l, double *a, int *lda, double *tau,
double *c, int *ldc, double *work, int *lwork,
int *info);
extern void
dptts2_(int *n, int *nrhs, double *d,
double *e, double *b, int *ldb);
extern void
dsbgvd_(char *jobz, char *uplo, int *n, int *ka,
int *kb, double *ab, int *ldab, double *bb, int *ldbb,
double *w, double *z, int *ldz, double *work,
int *lwork, int *iwork, int *liwork, int *info);
extern void
dsbgvx_(char *jobz, char *range, char *uplo, int *n,
int *ka, int *kb, double *ab, int *ldab,
double *bb, int *ldbb, double *q, int *ldq,
double *vl, double *vu, int *il, int *iu,
double *abstol, int *m, double *w, double *z, int *ldz,
double *work, int *iwork, int *ifail, int *info);
extern void
dspgvd_(int *itype, char *jobz, char *uplo, int *n,
double *ap, double *bp, double *w, double *z, int *ldz,
double *work, int *lwork, int *iwork, int *liwork,
int *info);
extern void
dspgvx_(int *itype, char *jobz, char *range, char *uplo,
int *n, double *ap, double *bp, double *vl,
double *vu, int *il, int *iu, double *abstol,
int *m, double *w, double *z, int *ldz,
double *work, int *iwork, int *ifail, int *info);
extern void
dstegr_(char *jobz, char *range, int *n,
double *d, double *e, double *vl, double *vu,
int *il, int *iu, double *abstol, int *m, double *w,
double *z, int *ldz, int *isuppz, double *work,
int *lwork, int *iwork, int *liwork, int *info);
extern void
dstevr_(char *jobz, char *range, int *n,
double *d, double *e, double *vl, double *vu,
int *il, int *iu, double *abstol, int *m, double *w,
double *z, int *ldz, int *isuppz, double *work,
int *lwork, int *iwork, int *liwork, int *info);
extern void
dsygvd_(int *itype, char *jobz, char *uplo, int *n,
double *a, int *lda, double *b, int *ldb,
double *w, double *work, int *lwork, int *iwork,
int *liwork, int *info);
extern void
dsygvx_(int *itype, char *jobz, char *range, char *uplo,
int *n, double *a, int *lda, double *b, int *ldb,
double *vl, double *vu, int *il, int *iu,
double *abstol, int *m, double *w, double *z, int *ldz,
double *work, int *lwork, int *iwork,
int *ifail, int *info);
extern void
dtgex2_(int *wantq, int *wantz, int *n,
double *a, int *lda, double *b, int *ldb,
double *q, int *ldq, double *z, int *ldz,
int *j1, int *n1, int *n2,
double *work, int *lwork, int *info);
extern void
dtgexc_(int *wantq, int *wantz, int *n,
double *a, int *lda, double *b, int *ldb,
double *q, int *ldq, double *z, int *ldz,
int *ifst, int *ilst, double *work, int *lwork, int *info);
extern void
dtgsen_(int *ijob, int *wantq, int *wantz,
int *select, int *n, double *a, int *lda,
double *b, int *ldb, double *alphar, double *alphai,
double *beta, double *q, int *ldq, double *z, int *ldz,
int *m, double *pl, double *pr, double *dif,
double *work, int *lwork, int *iwork, int *liwork,
int *info);
extern void
dtgsna_(char *job, char *howmny, int *select,
int *n, double *a, int *lda, double *b, int *ldb,
double *vl, int *ldvl, double *vr, int *ldvr,
double *s, double *dif, int *mm, int *m,
double *work, int *lwork, int *iwork, int *info);
extern void
dtgsy2_(char *trans, int *ijob, int *m, int *n,
double *a, int *lda, double *b, int *ldb,
double *c, int *ldc, double *d, int *ldd,
double *e, int *lde, double *f, int *ldf,
double *scale, double *rdsum, double *rdscal,
int *iwork, int *pq, int *info);
extern void
dtgsyl_(char *trans, int *ijob, int *m, int *n,
double *a, int *lda, double *b, int *ldb,
double *c, int *ldc, double *d, int *ldd,
double *e, int *lde, double *f, int *ldf,
double *scale, double *dif, double *work, int *lwork,
int *iwork, int *info);
extern void
dtzrzf_(int *m, int *n, double *a, int *lda,
double *tau, double *work, int *lwork, int *info);
extern void
dpstrf_(const char* uplo, const int* n,
double* a, const int* lda, int* piv, int* rank,
double* tol, double *work, int* info);
extern int
lsame_(char *ca, char *cb);
extern void
zbdsqr_(char *uplo, int *n, int *ncvt, int *nru,
int *ncc, double *d, double *e, Rcomplex *vt,
int *ldvt, Rcomplex *u, int *ldu, Rcomplex *c,
int *ldc, double *rwork, int *info);
extern void
zdrot_(int *n, Rcomplex *cx, int *incx,
Rcomplex *cy, int *incy, double *c, double *s);
extern void
zgebak_(char *job, char *side, int *n, int *ilo,
int *ihi, double *scale, int *m, Rcomplex *v,
int *ldv, int *info);
extern void
zgebal_(char *job, int *n, Rcomplex *a, int *lda,
int *ilo, int *ihi, double *scale, int *info);
extern void
zgebd2_(int *m, int *n, Rcomplex *a,
int *lda, double *d, double *e, Rcomplex *tauq,
Rcomplex *taup, Rcomplex *work, int *info);
extern void
zgebrd_(int *m, int *n, Rcomplex *a, int *lda,
double *d, double *e, Rcomplex *tauq, Rcomplex *taup,
Rcomplex *work, int *lwork, int *info);
extern void
zgehd2_(int *n, int *ilo, int *ihi,
Rcomplex *a, int *lda, Rcomplex *tau,
Rcomplex *work, int *info);
extern void
zgehrd_(int *n, int *ilo, int *ihi,
Rcomplex *a, int *lda, Rcomplex *tau,
Rcomplex *work, int *lwork, int *info);
extern void
zgelq2_(int *m, int *n, Rcomplex *a,
int *lda, Rcomplex *tau, Rcomplex *work, int *info);
extern void
zgelqf_(int *m, int *n, Rcomplex *a,
int *lda, Rcomplex *tau, Rcomplex *work, int *lwork,
int *info);
extern void
zgeqr2_(int *m, int *n, Rcomplex *a,
int *lda, Rcomplex *tau, Rcomplex *work, int *info);
extern void
zgeqrf_(int *m, int *n, Rcomplex *a,
int *lda, Rcomplex *tau, Rcomplex *work, int *lwork,
int *info);
extern void
zgetf2_(int *m, int *n, Rcomplex *a,
int *lda, int *ipiv, int *info);
extern void
zgetrf_(int *m, int *n, Rcomplex *a,
int *lda, int *ipiv, int *info);
extern void
zgetrs_(char *trans, int *n, int *nrhs,
Rcomplex *a, int *lda, int *ipiv, Rcomplex *b,
int *ldb, int *info);
extern void
zhetd2_(char *uplo, int *n, Rcomplex *a, int *lda, double *d,
double *e, Rcomplex *tau, int *info);
extern void
zhetrd_(char *uplo, int *n, Rcomplex *a,
int *lda, double *d, double *e, Rcomplex *tau,
Rcomplex *work, int *lwork, int *info);
extern void
zhseqr_(char *job, char *compz, int *n, int *ilo,
int *ihi, Rcomplex *h, int *ldh, Rcomplex *w,
Rcomplex *z, int *ldz, Rcomplex *work, int *lwork,
int *info);
extern void
zlabrd_(int *m, int *n, int *nb,
Rcomplex *a, int *lda, double *d, double *e,
Rcomplex *tauq, Rcomplex *taup, Rcomplex *x, int *
ldx, Rcomplex *y, int *ldy);
extern void
zlacgv_(int *n, Rcomplex *x, int *incx);
extern void
zlacpy_(char *uplo, int *m, int *n,
Rcomplex *a, int *lda, Rcomplex *b, int *ldb);
extern void
zlahqr_(int *wantt, int *wantz, int *n,
int *ilo, int *ihi, Rcomplex *h, int *ldh,
Rcomplex *w, int *iloz, int *ihiz, Rcomplex *z,
int *ldz, int *info);
extern void
zlahrd_(int *n, int *k, int *nb,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *t,
int *ldt, Rcomplex *y, int *ldy);
extern double
zlange_(char *norm, int *m, int *n, Rcomplex *a, int *lda,
double *work);
extern double
zlanhe_(char *norm, char *uplo, int *n, Rcomplex *a,
int *lda, double *work);
extern double
zlanhs_(char *norm, int *n, Rcomplex *a, int *lda, double *work);
extern void
zlaqp2_(int *m, int *n, int *offset,
Rcomplex *a, int *lda, int *jpvt, Rcomplex *tau,
double *vn1, double *vn2, Rcomplex *work);
extern void
zlaqps_(int *m, int *n, int *offset, int
*nb, int *kb, Rcomplex *a, int *lda, int *jpvt,
Rcomplex *tau, double *vn1, double *vn2, Rcomplex *
auxv, Rcomplex *f, int *ldf);
extern void
zlarf_(char *side, int *m, int *n, Rcomplex
*v, int *incv, Rcomplex *tau, Rcomplex *c, int *
ldc, Rcomplex *work);
extern void
zlarfb_(char *side, char *trans, char *direct, char *
storev, int *m, int *n, int *k, Rcomplex *v, int
*ldv, Rcomplex *t, int *ldt, Rcomplex *c, int *
ldc, Rcomplex *work, int *ldwork);
extern void
zlarfg_(int *n, Rcomplex *alpha, Rcomplex *
x, int *incx, Rcomplex *tau);
extern void
zlarft_(char *direct, char *storev, int *n, int *
k, Rcomplex *v, int *ldv, Rcomplex *tau, Rcomplex *
t, int *ldt);
extern void
zlarfx_(char *side, int *m, int *n,
Rcomplex *v, Rcomplex *tau, Rcomplex *c, int *
ldc, Rcomplex *work);
extern void
zlascl_(char *type, int *kl, int *ku,
double *cfrom, double *cto, int *m, int *n,
Rcomplex *a, int *lda, int *info);
extern void
zlaset_(char *uplo, int *m, int *n,
Rcomplex *alpha, Rcomplex *Rf_beta, Rcomplex *a, int *
lda);
extern void
zlasr_(char *side, char *pivot, char *direct, int *m,
int *n, double *c, double *s, Rcomplex *a,
int *lda);
extern void
zlassq_(int *n, Rcomplex *x, int *incx,
double *scale, double *sumsq);
extern void
zlaswp_(int *n, Rcomplex *a, int *lda,
int *k1, int *k2, int *ipiv, int *incx);
extern void
zlatrd_(char *uplo, int *n, int *nb,
Rcomplex *a, int *lda, double *e, Rcomplex *tau,
Rcomplex *w, int *ldw);
extern void
zlatrs_(char *uplo, char *trans, char *diag, char *
normin, int *n, Rcomplex *a, int *lda, Rcomplex *x,
double *scale, double *cnorm, int *info);
extern void
zsteqr_(char *compz, int *n, double *d,
double *e, Rcomplex *z, int *ldz, double *work,
int *info);
extern void
ztrcon_(const char *norm, const char *uplo, const char *diag,
const int *n, const Rcomplex *a, const int *lda,
double *rcond, Rcomplex *work, double *rwork, int *info);
extern void
ztrevc_(char *side, char *howmny, int *select,
int *n, Rcomplex *t, int *ldt, Rcomplex *vl,
int *ldvl, Rcomplex *vr, int *ldvr, int *mm, int
*m, Rcomplex *work, double *rwork, int *info);
extern void
zung2l_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *info);
extern void
zung2r_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *info);
extern void
zungbr_(char *vect, int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zunghr_(int *n, int *ilo, int *ihi,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zungl2_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *info);
extern void
zunglq_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zungql_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zungqr_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zungr2_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *info);
extern void
zungrq_(int *m, int *n, int *k,
Rcomplex *a, int *lda, Rcomplex *tau, Rcomplex *
work, int *lwork, int *info);
extern void
zungtr_(char *uplo, int *n, Rcomplex *a,
int *lda, Rcomplex *tau, Rcomplex *work, int *lwork,
int *info);
extern void
zunm2r_(char *side, char *trans, int *m, int *n,
int *k, Rcomplex *a, int *lda, Rcomplex *tau,
Rcomplex *c, int *ldc, Rcomplex *work, int *info);
extern void
zunmbr_(char *vect, char *side, char *trans, int *m,
int *n, int *k, Rcomplex *a, int *lda, Rcomplex
*tau, Rcomplex *c, int *ldc, Rcomplex *work, int *
lwork, int *info);
extern void
zunml2_(char *side, char *trans, int *m, int *n,
int *k, Rcomplex *a, int *lda, Rcomplex *tau,
Rcomplex *c, int *ldc, Rcomplex *work, int *info);
extern void
zunmlq_(char *side, char *trans, int *m, int *n,
int *k, Rcomplex *a, int *lda, Rcomplex *tau,
Rcomplex *c, int *ldc, Rcomplex *work, int *lwork,
int *info);
extern void
zgesdd_(const char *jobz,
const int *m, const int *n,
Rcomplex *a, const int *lda, double *s,
Rcomplex *u, const int *ldu,
Rcomplex *vt, const int *ldvt,
Rcomplex *work, const int *lwork, double *rwork,
int *iwork, int *info);
extern void
zgelsd_(int *m, int *n, int *nrhs,
Rcomplex *a, int *lda, Rcomplex *b, int *ldb, double *s,
double *rcond, int *rank,
Rcomplex *work, int *lwork, double *rwork, int *iwork, int *info);
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Linpack.h
extern "C" {
extern void dpbfa_(double*, int*, int*, int*, int*); // dpbfa_ unused
extern void dpbsl_(double*, int*, int*, int*, double*); // dpbsl_ unused
extern void dpoco_(double*, int*, int*, double*, double*, int*); // dpoco_ used 4 times in locpol
extern void dpodi_(double*, int*, int*, double*, int*); // dpodi_ used 3 times in locpol
extern void dpofa_(double*, int*, int*, int*); // dpofa_ unused
extern void dposl_(double*, int*, int*, double*); // dposl_ used 4 times in locpol
extern void dqrdc_(double*, int*, int*, int*, double*, int*, double*, int*); // dqrdc_ unused
extern void dqrsl_(double*, int*, int*, int*, double*, double*, double*, double*, double*, double*, double*, int*, int*); // dqrsl_ used 3 times in earth
extern void dsvdc_(double*, int*, int*, int*, double*, double*, double*, int*, double*, int*, double*, int*, int*); // dsvdc_ unused
extern void dtrco_(double*, int*, int*, double*, double*, int*); // dtrco_ unused
extern void dtrsl_(double*, int*, int*, double*, int*, int*); // dtrsl_ used 2 times in earth
extern void dchdc_(double*, int*, int*, double*, int*, int*, int*); // dchdc_ unused
extern void dchdd_(double*, int*, int*, double*, double*, int*, int*, double*, double*, double*, double*, int*); // dchdd_ unused
extern void dchex_(double*, int*, int*, int*, int*, double*, int*, int*, double*, double*, int*); // dchex_ unused
extern void dchud_(double*, int*, int*, double*, double*, int*, int*, double*, double*, double*, double*); // dchud_ unused
extern void dgbco_(double*, int*, int*, int*, int*, int*, double*, double*); // dgbco_ unused
extern void dgbdi_(double*, int*, int*, int*, int*, int*, double*); // dgbdi_ unused
extern void dgbfa_(double*, int*, int*, int*, int*, int*, int*); // dgbfa_ unused
extern void dgbsl_(double*, int*, int*, int*, int*, int*, double*, int*); // dgbsl_ unused
extern void dgeco_(double*, int*, int*, int*, double*, double*); // dgeco_ unused
extern void dgedi_(double*, int*, int*, int*, double*, double*, int*); // dgedi_ unused
extern void dgefa_(double*, int*, int*, int*, int*); // dgefa_ unused
extern void dgesl_(double*, int*, int*, int*, double*, int*); // dgesl_ unused
extern void dgtsl_(int*, double*, double*, double*, double*, int*); // dgtsl_ unused
extern void dpbco_(double*, int*, int*, int*, double*, double*, int*); // dpbco_ unused
extern void dpbdi_(double*, int*, int*, int*, double*); // dpbdi_ unused
extern void dppco_(double*, int*, double*, double*, int*); // dppco_ unused
extern void dppdi_(double*, int*, double*, int*); // dppdi_ unused
extern void dppfa_(double*, int*, int*); // dppfa_ unused
extern void dppsl_(double*, int*, double*); // dppsl_ unused
extern void dptsl_(int*, double*, double*, double*); // dptsl_ unused
extern void dsico_(double*, int*, int*, int*, double*, double*); // dsico_ unused
extern void dsidi_(double*, int*, int*, int*, double*, int*, double*, int*); // dsidi_ unused
extern void dsifa_(double*, int*, int*, int*, int*); // dsifa_ unused
extern void dsisl_(double*, int*, int*, int*, double*); // dsisl_ unused
extern void dspco_(double*, int*, int*, double*, double*); // dspco_ unused
extern void dspdi_(double*, int*, int*, double*, int*, double*, int*); // dspdi_ unused
extern void dspfa_(double*, int*, int*, int*); // dspfa_ unused
extern void dspsl_(double*, int*, int*, double*); // dspsl_ unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/MathThreads.h
extern "C" {
extern int R_num_math_threads; // R_num_math_threads used 2 times in apcluster
extern int R_max_num_math_threads; // R_max_num_math_threads unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Memory.h
extern "C" {
void* vmaxget(void); // vmaxget used 279 times in 20 packages
void vmaxset(const void *); // vmaxset used 279 times in 20 packages
void R_gc(void); // R_gc used 6 times in TMB, excel.link, gmatrix, microbenchmark
int R_gc_running(); // R_gc_running unused
char* R_alloc(size_t, int); // R_alloc used 7787 times in 330 packages
long double *R_allocLD(size_t nelem);
char* S_alloc(long, int); // S_alloc used 540 times in 50 packages
char* S_realloc(char *, long, long, int); // S_realloc used 55 times in 11 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Parse.h
extern "C" {
typedef enum {
PARSE_NULL,
PARSE_OK,
PARSE_INCOMPLETE,
PARSE_ERROR,
PARSE_EOF
} ParseStatus; // ParseStatus used 25 times in 11 packages
SEXP R_ParseVector(SEXP, int, ParseStatus *, SEXP); // R_ParseVector used 21 times in 11 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Print.h
extern "C" {
void Rprintf(const char *, ...); // Rprintf used 33813 times in 729 packages
void REprintf(const char *, ...); // REprintf used 2531 times in 135 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/PrtUtil.h
extern "C" {
void Rf_formatLogical(int *, R_xlen_t, int *); // Rf_formatLogical unused
// formatLogical used 2 times in qtbase, RGtk2
void Rf_formatInteger(int *, R_xlen_t, int *); // Rf_formatInteger unused
// formatInteger used 2 times in qtbase, RGtk2
void Rf_formatReal(double *, R_xlen_t, int *, int *, int *, int); // Rf_formatReal used 2 times in Rcpp, Rcpp11
// formatReal used 5 times in data.table, qtbase, RGtk2
void Rf_formatComplex(Rcomplex *, R_xlen_t, int *, int *, int *, int *, int *, int *, int); // Rf_formatComplex used 2 times in Rcpp, Rcpp11
// formatComplex unused
const char *Rf_EncodeLogical(int, int); // Rf_EncodeLogical unused
// EncodeLogical used 2 times in qtbase, RGtk2
const char *Rf_EncodeInteger(int, int); // Rf_EncodeInteger unused
// EncodeInteger used 2 times in qtbase, RGtk2
const char *Rf_EncodeReal0(double, int, int, int, const char *); // Rf_EncodeReal0 unused
// EncodeReal0 unused
const char *Rf_EncodeComplex(Rcomplex, int, int, int, int, int, int, const char *); // Rf_EncodeComplex used 2 times in Rcpp, Rcpp11
// EncodeComplex unused
const char *Rf_EncodeReal(double, int, int, int, char); // Rf_EncodeReal used 2 times in Rcpp, Rcpp11
// EncodeReal used 2 times in qtbase, RGtk2
int Rf_IndexWidth(R_xlen_t); // Rf_IndexWidth unused
// IndexWidth unused
void Rf_VectorIndex(R_xlen_t, int); // Rf_VectorIndex unused
// VectorIndex used 6 times in gnmf
void Rf_printIntegerVector(int *, R_xlen_t, int); // Rf_printIntegerVector unused
// printIntegerVector used 2 times in bvpSolve, deTestSet
void Rf_printRealVector (double *, R_xlen_t, int); // Rf_printRealVector unused
// printRealVector used 2 times in bvpSolve, deTestSet
void Rf_printComplexVector(Rcomplex *, R_xlen_t, int); // Rf_printComplexVector unused
// printComplexVector unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/QuartzDevice.h
extern "C" {
typedef void* QuartzDesc_t;
typedef struct QuartzBackend_s {
int size;
double width, height;
double scalex, scaley, pointsize;
int bg, canvas;
int flags;
void* userInfo;
CGContextRef (*getCGContext)(QuartzDesc_t dev, void*userInfo);
int (*locatePoint)(QuartzDesc_t dev, void*userInfo, double*x, double*y);
void (*close)(QuartzDesc_t dev, void*userInfo);
void (*newPage)(QuartzDesc_t dev, void*userInfo, int flags);
void (*state)(QuartzDesc_t dev, void*userInfo, int state);
void* (*par)(QuartzDesc_t dev, void*userInfo, int set, const char *key, void *value);
void (*sync)(QuartzDesc_t dev, void*userInfo);
void* (*cap)(QuartzDesc_t dev, void*userInfo);
} QuartzBackend_t; // QuartzBackend_t unused
typedef struct QuartzParameters_s {
int size;
const char *type, *file, *title;
double x, y, width, height, pointsize;
const char *family;
int flags;
int connection;
int bg, canvas;
double *dpi;
double pard1, pard2;
int pari1, pari2;
const char *pars1, *pars2;
void *parv;
} QuartzParameters_t; // QuartzParameters_t unused
QuartzDesc_t QuartzDevice_Create(void *dd, QuartzBackend_t* def); // QuartzDevice_Create unused
typedef struct QuartzFunctons_s {
void* (*Create)(void *, QuartzBackend_t *);
int (*DevNumber)(QuartzDesc_t desc);
void (*Kill)(QuartzDesc_t desc);
void (*ResetContext)(QuartzDesc_t desc);
double (*GetWidth)(QuartzDesc_t desc);
double (*GetHeight)(QuartzDesc_t desc);
void (*SetSize)(QuartzDesc_t desc, double width, double height);
double (*GetScaledWidth)(QuartzDesc_t desc);
double (*GetScaledHeight)(QuartzDesc_t desc);
void (*SetScaledSize)(QuartzDesc_t desc, double width, double height);
double (*GetXScale)(QuartzDesc_t desc);
double (*GetYScale)(QuartzDesc_t desc);
void (*SetScale)(QuartzDesc_t desc,double scalex, double scaley);
void (*SetTextScale)(QuartzDesc_t desc,double scale);
double (*GetTextScale)(QuartzDesc_t desc);
void (*SetPointSize)(QuartzDesc_t desc,double ps);
double (*GetPointSize)(QuartzDesc_t desc);
int (*GetDirty)(QuartzDesc_t desc);
void (*SetDirty)(QuartzDesc_t desc,int dirty);
void (*ReplayDisplayList)(QuartzDesc_t desc);
void* (*GetSnapshot)(QuartzDesc_t desc, int last);
void (*RestoreSnapshot)(QuartzDesc_t desc,void* snapshot);
int (*GetAntialias)(QuartzDesc_t desc);
void (*SetAntialias)(QuartzDesc_t desc, int aa);
int (*GetBackground)(QuartzDesc_t desc);
void (*Activate)(QuartzDesc_t desc);
void* (*SetParameter)(QuartzDesc_t desc, const char *key, void *value);
void* (*GetParameter)(QuartzDesc_t desc, const char *key);
} QuartzFunctions_t; // QuartzFunctions_t unused
QuartzFunctions_t *getQuartzFunctions(); // getQuartzFunctions unused
typedef QuartzDesc_t (*quartz_create_fn_t)(void *dd, QuartzFunctions_t *fn, QuartzParameters_t *par);
extern
QuartzDesc_t (*ptr_QuartzBackend)(void *dd, QuartzFunctions_t *fn, QuartzParameters_t *par);
QuartzDesc_t Quartz_C(QuartzParameters_t *par, quartz_create_fn_t q_create, int *errorCode); // Quartz_C unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/R-ftp-http.h
typedef int_fast64_t DLsize_t; // DLsize_t unused
extern "C" {
void *R_HTTPOpen(const char *url); // R_HTTPOpen unused
int R_HTTPRead(void *ctx, char *dest, int len); // R_HTTPRead unused
void R_HTTPClose(void *ctx); // R_HTTPClose unused
void *R_FTPOpen(const char *url); // R_FTPOpen unused
int R_FTPRead(void *ctx, char *dest, int len); // R_FTPRead unused
void R_FTPClose(void *ctx); // R_FTPClose unused
void * RxmlNanoHTTPOpen(const char *URL, char **contentType, const char *headers, int cacheOK); // RxmlNanoHTTPOpen unused
int RxmlNanoHTTPRead(void *ctx, void *dest, int len); // RxmlNanoHTTPRead unused
void RxmlNanoHTTPClose(void *ctx); // RxmlNanoHTTPClose unused
int RxmlNanoHTTPReturnCode(void *ctx); // RxmlNanoHTTPReturnCode unused
char * RxmlNanoHTTPStatusMsg(void *ctx); // RxmlNanoHTTPStatusMsg unused
DLsize_t RxmlNanoHTTPContentLength(void *ctx); // RxmlNanoHTTPContentLength unused
char * RxmlNanoHTTPContentType(void *ctx); // RxmlNanoHTTPContentType unused
void RxmlNanoHTTPTimeout(int delay); // RxmlNanoHTTPTimeout unused
void * RxmlNanoFTPOpen(const char *URL); // RxmlNanoFTPOpen unused
int RxmlNanoFTPRead(void *ctx, void *dest, int len); // RxmlNanoFTPRead unused
int RxmlNanoFTPClose(void *ctx); // RxmlNanoFTPClose unused
void RxmlNanoFTPTimeout(int delay); // RxmlNanoFTPTimeout unused
DLsize_t RxmlNanoFTPContentLength(void *ctx); // RxmlNanoFTPContentLength unused
void RxmlMessage(int level, const char *format, ...); // RxmlMessage unused
void RxmlNanoFTPCleanup(void); // RxmlNanoFTPCleanup unused
void RxmlNanoHTTPCleanup(void); // RxmlNanoHTTPCleanup unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/RS.h
extern "C" {
extern void *R_chk_calloc(size_t, size_t); // R_chk_calloc used 6 times in rpart, XML, itree, ifultools, mgcv
extern void *R_chk_realloc(void *, size_t); // R_chk_realloc used 5 times in seqminer, gpuR, ifultools, mgcv
extern void R_chk_free(void *); // R_chk_free used 2 times in mgcv
void call_R(char*, long, void**, char**, long*, char**, long, char**); // call_R used 2 times in PoweR
// call_S used 2 times in locfit
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/RStartup.h
extern "C" {
typedef enum {
SA_NORESTORE,
SA_RESTORE,
SA_DEFAULT,
SA_NOSAVE,
SA_SAVE,
SA_SAVEASK,
SA_SUICIDE
} SA_TYPE; // SA_TYPE used 7 times in Rserve, rJava, littler
typedef struct
{
Rboolean R_Quiet;
Rboolean R_Slave;
Rboolean R_Interactive;
Rboolean R_Verbose;
Rboolean LoadSiteFile;
Rboolean LoadInitFile;
Rboolean DebugInitFile;
SA_TYPE RestoreAction;
SA_TYPE SaveAction;
size_t vsize;
size_t nsize;
size_t max_vsize;
size_t max_nsize;
size_t ppsize;
int NoRenviron;
} structRstart; // structRstart used 8 times in RInside, rscproxy, Rserve, rJava, littler
typedef structRstart *Rstart;
void R_DefParams(Rstart); // R_DefParams used 9 times in RInside, rscproxy, Rserve, rJava, littler
void R_SetParams(Rstart); // R_SetParams used 14 times in RInside, rscproxy, Rserve, rJava, littler
void R_SetWin32(Rstart); // R_SetWin32 used 2 times in Rserve, rJava
void R_SizeFromEnv(Rstart); // R_SizeFromEnv used 3 times in Rserve, rJava
void R_common_command_line(int *, char **, Rstart); // R_common_command_line used 3 times in Rserve, rJava
void R_set_command_line_arguments(int argc, char **argv); // R_set_command_line_arguments used 4 times in Rserve, rJava, rscproxy
void setup_Rmainloop(void); // setup_Rmainloop used 6 times in Rserve, rJava, rscproxy
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Rallocators.h
typedef void *(*custom_alloc_t)(R_allocator_t *allocator, size_t);
typedef void (*custom_free_t)(R_allocator_t *allocator, void *);
struct R_allocator {
custom_alloc_t mem_alloc;
custom_free_t mem_free;
void *res;
void *data;
};
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Random.h
extern "C" {
typedef enum {
WICHMANN_HILL,
MARSAGLIA_MULTICARRY,
SUPER_DUPER,
MERSENNE_TWISTER,
KNUTH_TAOCP,
USER_UNIF,
KNUTH_TAOCP2,
LECUYER_CMRG
} RNGtype; // RNGtype unused
typedef enum {
BUGGY_KINDERMAN_RAMAGE,
AHRENS_DIETER,
BOX_MULLER,
USER_NORM,
INVERSION,
KINDERMAN_RAMAGE
} N01type; // N01type unused
void GetRNGstate(void); // GetRNGstate used 1753 times in 434 packages
void PutRNGstate(void); // PutRNGstate used 1794 times in 427 packages
double unif_rand(void); // unif_rand used 2135 times in 327 packages
double norm_rand(void); // norm_rand used 408 times in 93 packages
double exp_rand(void); // exp_rand used 110 times in 25 packages
typedef unsigned int Int32;
double * user_unif_rand(void); // user_unif_rand used 10 times in randaes, rstream, rngwell19937, SuppDists, randtoolbox, rlecuyer, Rrdrand
void user_unif_init(Int32); // user_unif_init used 5 times in randaes, SuppDists, randtoolbox, rngwell19937
int * user_unif_nseed(void); // user_unif_nseed used 4 times in randaes, SuppDists, rngwell19937
int * user_unif_seedloc(void); // user_unif_seedloc used 4 times in randaes, SuppDists, rngwell19937
double * user_norm_rand(void); // user_norm_rand used 1 time in RcppZiggurat
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Rdynload.h
typedef void * (*DL_FUNC)();
typedef unsigned int R_NativePrimitiveArgType;
typedef unsigned int R_NativeObjectArgType;
typedef enum {R_ARG_IN, R_ARG_OUT, R_ARG_IN_OUT, R_IRRELEVANT} R_NativeArgStyle;
typedef struct {
const char *name;
DL_FUNC fun;
int numArgs;
R_NativePrimitiveArgType *types;
R_NativeArgStyle *styles;
} R_CMethodDef; // R_CMethodDef used 76 times in 73 packages
typedef R_CMethodDef R_FortranMethodDef; // R_FortranMethodDef used 21 times in 20 packages
typedef struct {
const char *name;
DL_FUNC fun;
int numArgs;
} R_CallMethodDef; // R_CallMethodDef used 156 times in 147 packages
typedef R_CallMethodDef R_ExternalMethodDef; // R_ExternalMethodDef used 8 times in devEMF, rgl, data.table, foreign, actuar, xts, Matrix, Rcpp
typedef struct _DllInfo DllInfo;
extern "C" {
int R_registerRoutines(DllInfo *info, const R_CMethodDef * const croutines, // R_registerRoutines used 209 times in 196 packages
const R_CallMethodDef * const callRoutines,
const R_FortranMethodDef * const fortranRoutines,
const R_ExternalMethodDef * const externalRoutines);
Rboolean R_useDynamicSymbols(DllInfo *info, Rboolean value); // R_useDynamicSymbols used 105 times in 102 packages
Rboolean R_forceSymbols(DllInfo *info, Rboolean value); // R_forceSymbols used 14 times in 14 packages
DllInfo *R_getDllInfo(const char *name); // R_getDllInfo unused
DllInfo *R_getEmbeddingDllInfo(void); // R_getEmbeddingDllInfo used 1 time in Rserve
typedef struct Rf_RegisteredNativeSymbol R_RegisteredNativeSymbol;
typedef enum {R_ANY_SYM=0, R_C_SYM, R_CALL_SYM, R_FORTRAN_SYM, R_EXTERNAL_SYM} NativeSymbolType;
DL_FUNC R_FindSymbol(char const *, char const *, // R_FindSymbol used 149 times in ergm, RTextTools, SamplerCompare, network, CCMnet, hergm
R_RegisteredNativeSymbol *symbol);
void R_RegisterCCallable(const char *package, const char *name, DL_FUNC fptr); // R_RegisterCCallable used 1077 times in 49 packages
DL_FUNC R_GetCCallable(const char *package, const char *name); // R_GetCCallable used 1417 times in 41 packages
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Riconv.h
extern "C" {
void * Riconv_open (const char* tocode, const char* fromcode); // Riconv_open used 10 times in devEMF, RCurl, pbdZMQ, ore, Nippon, readr
size_t Riconv (void * cd, const char **inbuf, size_t *inbytesleft, // Riconv used 14 times in devEMF, RCurl, pbdZMQ, ore, Nippon, readr
char **outbuf, size_t *outbytesleft);
int Riconv_close (void * cd); // Riconv_close used 7 times in devEMF, pbdZMQ, ore, Nippon, readr
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/Utils.h
extern "C" {
void R_isort(int*, int); // R_isort used 45 times in 18 packages
void R_rsort(double*, int); // R_rsort used 210 times in 29 packages
void R_csort(Rcomplex*, int); // R_csort unused
void rsort_with_index(double *, int *, int); // rsort_with_index used 77 times in 40 packages
void Rf_revsort(double*, int*, int); // Rf_revsort unused
// revsort used 60 times in 20 packages
void Rf_iPsort(int*, int, int); // Rf_iPsort unused
// iPsort used 3 times in matrixStats, robustbase
void Rf_rPsort(double*, int, int); // Rf_rPsort unused
// rPsort used 63 times in 15 packages
void Rf_cPsort(Rcomplex*, int, int); // Rf_cPsort unused
// cPsort unused
void R_qsort (double *v, size_t i, size_t j); // R_qsort used 10 times in extWeibQuant, pomp, robustbase, dplR, tclust, pcaPP
void R_qsort_I (double *v, int *II, int i, int j); // R_qsort_I used 33 times in 15 packages
void R_qsort_int (int *iv, size_t i, size_t j); // R_qsort_int unused
void R_qsort_int_I(int *iv, int *II, int i, int j); // R_qsort_int_I used 19 times in ff, matrixStats, arules, Rborist, slam, eco, bnlearn
const char *R_ExpandFileName(const char *); // R_ExpandFileName used 42 times in 20 packages
void Rf_setIVector(int*, int, int); // Rf_setIVector unused
// setIVector unused
void Rf_setRVector(double*, int, double); // Rf_setRVector unused
// setRVector used 3 times in RcppClassic, RcppClassicExamples
Rboolean Rf_StringFalse(const char *); // Rf_StringFalse unused
// StringFalse used 3 times in iotools
Rboolean Rf_StringTrue(const char *); // Rf_StringTrue unused
// StringTrue used 3 times in iotools
Rboolean Rf_isBlankString(const char *); // Rf_isBlankString unused
// isBlankString used 1 time in iotools
double R_atof(const char *str); // R_atof used 9 times in SSN, tree, foreign, iotools
double R_strtod(const char *c, char **end); // R_strtod used 4 times in ape, iotools
char *R_tmpnam(const char *prefix, const char *tempdir); // R_tmpnam used 2 times in geometry
char *R_tmpnam2(const char *prefix, const char *tempdir, const char *fileext); // R_tmpnam2 unused
void R_CheckUserInterrupt(void); // R_CheckUserInterrupt used 1487 times in 234 packages
void R_CheckStack(void); // R_CheckStack used 115 times in vcrpart, actuar, cplm, lme4, Matrix, GNE, randtoolbox, HiPLARM, rngWELL, pedigreemm
void R_CheckStack2(size_t); // R_CheckStack2 unused
int findInterval(double *xt, int n, double x, // findInterval used 11 times in BSquare, DNAprofiles, unfoldr, chebpol, pomp, eco, protViz, PBSmapping, spatstat
Rboolean rightmost_closed, Rboolean all_inside, int ilo,
int *mflag);
void find_interv_vec(double *xt, int *n, double *x, int *nx, // find_interv_vec unused
int *rightmost_closed, int *all_inside, int *indx);
void R_max_col(double *matrix, int *nr, int *nc, int *maxes, int *ties_meth); // R_max_col used 2 times in geostatsp, MNP
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/eventloop.h
extern "C" {
typedef void (*InputHandlerProc)(void *userData);
typedef struct _InputHandler {
int activity;
int fileDescriptor;
InputHandlerProc handler;
struct _InputHandler *next;
int active;
void *userData;
} InputHandler; // InputHandler used 36 times in fdaPDE, httpuv, rgl, cairoDevice, setwidth, qtbase, RGtk2
extern InputHandler *initStdinHandler(void); // initStdinHandler unused
extern void consoleInputHandler(unsigned char *buf, int len); // consoleInputHandler unused
extern InputHandler *addInputHandler(InputHandler *handlers, int fd, InputHandlerProc handler, int activity); // addInputHandler used 10 times in httpuv, rgl, cairoDevice, Cairo, setwidth, rJava, qtbase, RGtk2
extern InputHandler *getInputHandler(InputHandler *handlers, int fd); // getInputHandler unused
extern int removeInputHandler(InputHandler **handlers, InputHandler *it); // removeInputHandler used 7 times in httpuv, rgl, cairoDevice, setwidth, qtbase, RGtk2
extern InputHandler *getSelectedHandler(InputHandler *handlers, fd_set *mask); // getSelectedHandler unused
extern fd_set *R_checkActivity(int usec, int ignore_stdin); // R_checkActivity used 3 times in audio, rJava, ROracle
extern fd_set *R_checkActivityEx(int usec, int ignore_stdin, void (*intr)(void));
extern void R_runHandlers(InputHandler *handlers, fd_set *mask); // R_runHandlers used 2 times in rJava
extern int R_SelectEx(int n, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds, struct timeval *timeout,
void (*intr)(void));
extern InputHandler *R_InputHandlers;
extern void (* R_PolledEvents)(void);
extern int R_wait_usec; // R_wait_usec unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/stats_package.h
enum AlgType {NREG = 1, OPT = 2};
enum VPos {F = 9, F0 = 12, FDIF = 10, G = 27, HC = 70};
enum IVPos {AI = 90, AM = 94, ALGSAV = 50, COVMAT = 25,
COVPRT = 13, COVREQ = 14, DRADPR = 100,
DTYPE = 15, IERR = 74, INITH = 24, INITS = 24,
IPIVOT = 75, IVNEED = 2, LASTIV = 42, LASTV = 44,
LMAT = 41, MXFCAL = 16, MXITER = 17, NEXTV = 46,
NFCALL = 5, NFCOV = 51, NFGCAL = 6, NGCOV = 52,
NITER = 30, NVDFLT = 49, NVSAVE = 8, OUTLEV = 18,
PARPRT = 19, PARSAV = 48, PERM = 57, PRUNIT = 20,
QRTYP = 79, RDREQ = 56, RMAT = 77, SOLPRT = 21,
STATPR = 22, TOOBIG = 1, VNEED = 3, VSAVE = 59,
X0PRT = 23};
void
S_Rf_divset(int alg, int iv[], int liv, int lv, double v[]);
void
S_nlsb_iterate(double b[], double d[], double dr[], int iv[],
int liv, int lv, int n, int nd, int p,
double r[], double rd[], double v[], double x[]);
void
S_nlminb_iterate(double b[], double d[], double fx, double g[],
double h[], int iv[], int liv, int lv, int n,
double v[], double x[]);
static inline int S_v_length(int alg, int n)
{
return (alg - 1) ? (105 + (n * (2 * n + 20))) :
(130 + (n * (n + 27))/2);
}
static inline int S_iv_length(int alg, int n)
{
return (alg - 1) ? (82 + 4 * n) : (78 + 3 * n);
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/R_ext/stats_stubs.h
void
S_Rf_divset(int alg, int iv[], int liv, int lv, double v[])
{
static void(*fun)(int,int[],int,int,double[]) = __null;
if (fun == __null)
fun = (void(*)(int,int[],int,int,double[]))
R_GetCCallable("stats", "Rf_divset");
fun(alg, iv, liv, lv, v);
}
void
S_nlminb_iterate(double b[], double d[], double fx, double g[], double h[],
int iv[], int liv, int lv, int n, double v[], double x[])
{
static void(*fun)(double[],double[],double,double[],double[],
int[],int,int,int,double[],double[]) = __null;
if (fun == __null)
fun = (void(*)(double[],double[],double,double[],double[],
int[],int,int,int,double[],double[]))
R_GetCCallable("stats", "nlminb_iterate");
fun(b, d, fx, g, h, iv, liv, lv, n, v, x);
}
void
S_nlsb_iterate(double b[], double d[], double dr[], int iv[], int liv,
int lv, int n, int nd, int p, double r[], double rd[],
double v[], double x[])
{
static void(*fun)(double[],double[],double[],int[],int,int,
int,int,int,double[],double[],double[],
double[]) = __null;
if (fun == __null)
fun = (void(*)(double[],double[],double[],int[],int,
int, int,int,int,double[],
double[],double[],double[]))
R_GetCCallable("stats", "nlsb_iterate");
fun(b, d, dr, iv, liv, lv, n, nd, p, r, rd, v, x);
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rembedded.h
extern "C" {
extern int Rf_initEmbeddedR(int argc, char *argv[]);
extern void Rf_endEmbeddedR(int fatal); // Rf_endEmbeddedR used 4 times in RInside, Rhpc, rscproxy, littler
int Rf_initialize_R(int ac, char **av); // Rf_initialize_R used 3 times in Rserve, rJava
void setup_Rmainloop(void); // setup_Rmainloop used 6 times in Rserve, rJava, rscproxy
extern void R_ReplDLLinit(void); // R_ReplDLLinit used 7 times in RInside, Rhpc, rscproxy, Rserve, rJava, littler
extern int R_ReplDLLdo1(void); // R_ReplDLLdo1 used 3 times in Rserve, RInside, rJava
void R_setStartTime(void); // R_setStartTime unused
extern void R_RunExitFinalizers(void); // R_RunExitFinalizers used 4 times in RInside, TMB, rJava, littler
extern void CleanEd(void); // CleanEd used 1 time in rJava
extern void Rf_KillAllDevices(void); // Rf_KillAllDevices used 1 time in RInside
extern int R_DirtyImage; // R_DirtyImage used 1 time in rJava
extern void R_CleanTempDir(void); // R_CleanTempDir used 3 times in RInside, sprint, littler
extern char *R_TempDir;
extern void R_SaveGlobalEnv(void); // R_SaveGlobalEnv used 1 time in rJava
void fpu_setup(Rboolean start); // fpu_setup used 3 times in RInside, rJava, littler
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rinterface.h
extern "C" {
extern Rboolean R_Interactive; // R_Interactive used 16 times in Rhpc, rscproxy, Rserve, RInside, yaml, rJava, littler
extern Rboolean R_Slave; // R_Slave used 3 times in Rserve, Rhpc
extern void R_RestoreGlobalEnv(void); // R_RestoreGlobalEnv unused
extern void R_RestoreGlobalEnvFromFile(const char *, Rboolean); // R_RestoreGlobalEnvFromFile unused
extern void R_SaveGlobalEnv(void); // R_SaveGlobalEnv used 1 times in rJava
extern void R_SaveGlobalEnvToFile(const char *); // R_SaveGlobalEnvToFile unused
extern void R_FlushConsole(void); // R_FlushConsole used 651 times in 78 packages
extern void R_ClearerrConsole(void); // R_ClearerrConsole used 2 times in gap, rJava
extern void R_Suicide(const char *); // R_Suicide unused
extern char *R_HomeDir(void); // R_HomeDir unused
extern int R_DirtyImage; // R_DirtyImage used 1 times in rJava
extern char *R_GUIType;
extern void R_setupHistory(void); // R_setupHistory unused
extern char *R_HistoryFile;
extern int R_HistorySize; // R_HistorySize used 2 times in rJava
extern int R_RestoreHistory; // R_RestoreHistory unused
extern char *R_Home;
void __attribute__((noreturn)) Rf_jump_to_toplevel(void);
void Rf_mainloop(void); // Rf_mainloop unused
// mainloop unused
void Rf_onintr(void); // Rf_onintr used 216 times in 12 packages
// onintr used 1 times in rJava
extern void* R_GlobalContext;
void process_site_Renviron(void); // process_site_Renviron unused
void process_system_Renviron(void); // process_system_Renviron unused
void process_user_Renviron(void); // process_user_Renviron unused
extern FILE * R_Consolefile;
extern FILE * R_Outputfile;
void R_setStartTime(void); // R_setStartTime unused
void fpu_setup(Rboolean); // fpu_setup used 3 times in RInside, rJava, littler
extern int R_running_as_main_program; // R_running_as_main_program unused
extern int R_SignalHandlers; // R_SignalHandlers used 5 times in RInside, Rserve, rJava, littler
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rinternals.h
extern "C" {
typedef unsigned char Rbyte;
typedef int R_len_t; // R_len_t used 2397 times in 70 packages
typedef ptrdiff_t R_xlen_t; // R_xlen_t used 1537 times in 32 packages
typedef struct { R_xlen_t lv_length, lv_truelength; } R_long_vec_hdr_t;
typedef unsigned int SEXPTYPE;
struct sxpinfo_struct {
SEXPTYPE type : 5;
unsigned int obj : 1;
unsigned int named : 2;
unsigned int gp : 16;
unsigned int mark : 1;
unsigned int debug : 1;
unsigned int trace : 1;
unsigned int spare : 1;
unsigned int gcgen : 1;
unsigned int gccls : 3;
};
struct vecsxp_struct {
R_len_t length;
R_len_t truelength;
};
struct primsxp_struct {
int offset;
};
struct symsxp_struct {
struct SEXPREC *pname;
struct SEXPREC *value;
struct SEXPREC *internal;
};
struct listsxp_struct {
struct SEXPREC *carval;
struct SEXPREC *cdrval;
struct SEXPREC *tagval;
};
struct envsxp_struct {
struct SEXPREC *frame;
struct SEXPREC *enclos;
struct SEXPREC *hashtab;
};
struct closxp_struct {
struct SEXPREC *formals;
struct SEXPREC *body;
struct SEXPREC *env;
};
struct promsxp_struct {
struct SEXPREC *value;
struct SEXPREC *expr;
struct SEXPREC *env;
};
typedef struct SEXPREC {
struct sxpinfo_struct sxpinfo; struct SEXPREC *attrib; struct SEXPREC *gengc_next_node, *gengc_prev_node;
union {
struct primsxp_struct primsxp;
struct symsxp_struct symsxp;
struct listsxp_struct listsxp;
struct envsxp_struct envsxp;
struct closxp_struct closxp;
struct promsxp_struct promsxp;
} u; // u unused
} SEXPREC, *SEXP;
typedef struct VECTOR_SEXPREC {
struct sxpinfo_struct sxpinfo; struct SEXPREC *attrib; struct SEXPREC *gengc_next_node, *gengc_prev_node;
struct vecsxp_struct vecsxp;
} VECTOR_SEXPREC, *VECSEXP;
typedef union { VECTOR_SEXPREC s; double align; } SEXPREC_ALIGN;
R_len_t __attribute__((noreturn)) R_BadLongVector(SEXP, const char *, int);
SEXP (ATTRIB)(SEXP x); // ATTRIB used 83 times in 20 packages
int (OBJECT)(SEXP x); // OBJECT used 102 times in 28 packages
int (MARK)(SEXP x); // MARK used 251 times in 21 packages
int (TYPEOF)(SEXP x); // TYPEOF used 2832 times in 195 packages
int (NAMED)(SEXP x); // NAMED used 62 times in 22 packages
int (REFCNT)(SEXP x); // REFCNT unused
void (SET_OBJECT)(SEXP x, int v); // SET_OBJECT used 32 times in RSclient, reshape2, Rserve, data.table, actuar, dplyr, proxy, rmongodb, slam, tau
void (SET_TYPEOF)(SEXP x, int v); // SET_TYPEOF used 38 times in 21 packages
void (SET_NAMED)(SEXP x, int v); // SET_NAMED used 10 times in dplyr, yaml, data.table, iotools, RSQLite
void SET_ATTRIB(SEXP x, SEXP v); // SET_ATTRIB used 54 times in 18 packages
void DUPLICATE_ATTRIB(SEXP to, SEXP from); // DUPLICATE_ATTRIB used 5 times in covr, lfe, testthat, data.table
int (IS_S4_OBJECT)(SEXP x); // IS_S4_OBJECT used 23 times in Rmosek, Runuran, data.table, xts, Matrix, slam, zoo, HiPLARM, OpenMx, tau
void (SET_S4_OBJECT)(SEXP x); // SET_S4_OBJECT used 12 times in RSclient, redland, Rserve, data.table, FREGAT, rJPSGCS, tau
void (UNSET_S4_OBJECT)(SEXP x); // UNSET_S4_OBJECT used 2 times in data.table, slam
int (LENGTH)(SEXP x); // LENGTH used 5845 times in 356 packages
int (TRUELENGTH)(SEXP x); // TRUELENGTH used 37 times in data.table
void (SETLENGTH)(SEXP x, int v); // SETLENGTH used 65 times in 11 packages
void (SET_TRUELENGTH)(SEXP x, int v); // SET_TRUELENGTH used 26 times in data.table
R_xlen_t (XLENGTH)(SEXP x); // XLENGTH used 287 times in 21 packages
R_xlen_t (XTRUELENGTH)(SEXP x); // XTRUELENGTH unused
int (IS_LONG_VEC)(SEXP x); // IS_LONG_VEC used 1 times in RProtoBuf
int (LEVELS)(SEXP x); // LEVELS used 18 times in rtdists, rPref, BsMD, data.table, stringi, dplyr, OBsMD, pbdZMQ, astrochron, RandomFields
int (SETLEVELS)(SEXP x, int v); // SETLEVELS used 2 times in Rcpp11
int *(LOGICAL)(SEXP x); // LOGICAL used 4473 times in 288 packages
int *(INTEGER)(SEXP x); // INTEGER used 41659 times in 758 packages
Rbyte *(RAW)(SEXP x); // RAW used 880 times in 99 packages
double *(REAL)(SEXP x); // REAL used 30947 times in 687 packages
Rcomplex *(COMPLEX)(SEXP x); // COMPLEX used 1697 times in 71 packages
SEXP (STRING_ELT)(SEXP x, R_xlen_t i); // STRING_ELT used 4143 times in 333 packages
SEXP (VECTOR_ELT)(SEXP x, R_xlen_t i); // VECTOR_ELT used 8626 times in 291 packages
void SET_STRING_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_STRING_ELT used 5834 times in 321 packages
SEXP SET_VECTOR_ELT(SEXP x, R_xlen_t i, SEXP v); // SET_VECTOR_ELT used 9751 times in 391 packages
SEXP *(STRING_PTR)(SEXP x); // STRING_PTR used 65 times in 14 packages
SEXP * __attribute__((noreturn)) (VECTOR_PTR)(SEXP x);
SEXP (TAG)(SEXP e); // TAG used 513 times in 40 packages
SEXP (CAR)(SEXP e); // CAR used 575 times in 63 packages
SEXP (CDR)(SEXP e); // CDR used 4523 times in 76 packages
SEXP (CAAR)(SEXP e); // CAAR unused
SEXP (CDAR)(SEXP e); // CDAR unused
SEXP (CADR)(SEXP e); // CADR used 104 times in 17 packages
SEXP (CDDR)(SEXP e); // CDDR used 52 times in Rlabkey, Rcpp11, dplyr, proxy, Rcpp, slam, tikzDevice, OpenCL, svd
SEXP (CDDDR)(SEXP e); // CDDDR unused
SEXP (CADDR)(SEXP e); // CADDR used 52 times in 11 packages
SEXP (CADDDR)(SEXP e); // CADDDR used 21 times in RPostgreSQL, foreign, actuar, bibtex
SEXP (CAD4R)(SEXP e); // CAD4R used 14 times in earth, foreign, actuar
int (MISSING)(SEXP x); // MISSING used 125 times in 25 packages
void (SET_MISSING)(SEXP x, int v); // SET_MISSING used 1 times in sprint
void SET_TAG(SEXP x, SEXP y); // SET_TAG used 200 times in 34 packages
SEXP SETCAR(SEXP x, SEXP y); // SETCAR used 4072 times in 47 packages
SEXP SETCDR(SEXP x, SEXP y); // SETCDR used 46 times in 14 packages
SEXP SETCADR(SEXP x, SEXP y); // SETCADR used 112 times in 37 packages
SEXP SETCADDR(SEXP x, SEXP y); // SETCADDR used 45 times in 14 packages
SEXP SETCADDDR(SEXP x, SEXP y); // SETCADDDR used 31 times in 12 packages
SEXP SETCAD4R(SEXP e, SEXP y); // SETCAD4R used 15 times in kergp, Sim.DiffProc, tikzDevice
SEXP CONS_NR(SEXP a, SEXP b); // CONS_NR unused
SEXP (FORMALS)(SEXP x); // FORMALS used 15 times in qtpaint, RSclient, PBSddesolve, Rserve, covr, pryr, rgp, testthat, RandomFields
SEXP (BODY)(SEXP x); // BODY used 48 times in 15 packages
SEXP (CLOENV)(SEXP x); // CLOENV used 23 times in Rcpp11, covr, pomp, Rcpp, pryr, testthat, qtbase
int (RDEBUG)(SEXP x); // RDEBUG used 69 times in rmetasim
int (RSTEP)(SEXP x); // RSTEP unused
int (RTRACE)(SEXP x); // RTRACE unused
void (SET_RDEBUG)(SEXP x, int v); // SET_RDEBUG unused
void (SET_RSTEP)(SEXP x, int v); // SET_RSTEP unused
void (SET_RTRACE)(SEXP x, int v); // SET_RTRACE unused
void SET_FORMALS(SEXP x, SEXP v); // SET_FORMALS used 5 times in covr, rgp, testthat, qtbase
void SET_BODY(SEXP x, SEXP v); // SET_BODY used 6 times in covr, rgp, testthat, qtbase
void SET_CLOENV(SEXP x, SEXP v); // SET_CLOENV used 6 times in covr, rgp, testthat, qtbase
SEXP (PRINTNAME)(SEXP x); // PRINTNAME used 92 times in 29 packages
SEXP (SYMVALUE)(SEXP x); // SYMVALUE unused
SEXP (INTERNAL)(SEXP x); // INTERNAL used 1014 times in 63 packages
int (DDVAL)(SEXP x); // DDVAL unused
void (SET_DDVAL)(SEXP x, int v); // SET_DDVAL unused
void SET_PRINTNAME(SEXP x, SEXP v); // SET_PRINTNAME unused
void SET_SYMVALUE(SEXP x, SEXP v); // SET_SYMVALUE unused
void SET_INTERNAL(SEXP x, SEXP v); // SET_INTERNAL unused
SEXP (FRAME)(SEXP x); // FRAME used 19 times in deTestSet, IRISSeismic, pryr, BayesBridge, datamap, BayesLogit
SEXP (ENCLOS)(SEXP x); // ENCLOS used 7 times in Rcpp, pryr, rJava, Rcpp11, RGtk2
SEXP (HASHTAB)(SEXP x); // HASHTAB used 12 times in Rcpp, pryr, datamap, Rcpp11, qtbase
int (ENVFLAGS)(SEXP x); // ENVFLAGS unused
void (SET_ENVFLAGS)(SEXP x, int v); // SET_ENVFLAGS unused
void SET_FRAME(SEXP x, SEXP v); // SET_FRAME used 4 times in rgp, mmap, qtbase
void SET_ENCLOS(SEXP x, SEXP v); // SET_ENCLOS used 7 times in rgp, RandomFields, mmap, qtbase
void SET_HASHTAB(SEXP x, SEXP v); // SET_HASHTAB used 5 times in rgp, mmap, qtbase
SEXP (PRCODE)(SEXP x); // PRCODE used 15 times in dplyr, Rcpp, pryr, Rcpp11
SEXP (PRENV)(SEXP x); // PRENV used 14 times in igraph, dplyr, Rcpp, pryr, Rcpp11, lazyeval
SEXP (PRVALUE)(SEXP x); // PRVALUE used 12 times in dplyr, Rcpp, pryr, Rcpp11
int (PRSEEN)(SEXP x); // PRSEEN used 4 times in Rcpp, Rcpp11
void (SET_PRSEEN)(SEXP x, int v); // SET_PRSEEN unused
void SET_PRENV(SEXP x, SEXP v); // SET_PRENV unused
void SET_PRVALUE(SEXP x, SEXP v); // SET_PRVALUE unused
void SET_PRCODE(SEXP x, SEXP v); // SET_PRCODE unused
void SET_PRSEEN(SEXP x, int v); // SET_PRSEEN unused
int (HASHASH)(SEXP x); // HASHASH unused
int (HASHVALUE)(SEXP x); // HASHVALUE unused
void (SET_HASHASH)(SEXP x, int v); // SET_HASHASH unused
void (SET_HASHVALUE)(SEXP x, int v); // SET_HASHVALUE unused
typedef int PROTECT_INDEX; // PROTECT_INDEX used 94 times in 27 packages
extern SEXP R_GlobalEnv; // R_GlobalEnv used 1400 times in 79 packages
extern SEXP R_EmptyEnv; // R_EmptyEnv used 16 times in Rserve, dplR, Rcpp11, Rcpp, RcppClassic, pryr, rJava, adaptivetau, qtbase
extern SEXP R_BaseEnv; // R_BaseEnv used 27 times in 15 packages
extern SEXP R_BaseNamespace; // R_BaseNamespace used 3 times in Rcpp, Rcpp11
extern SEXP R_NamespaceRegistry; // R_NamespaceRegistry used 3 times in devtools, namespace, Rcpp
extern SEXP R_Srcref; // R_Srcref unused
extern SEXP R_NilValue; // R_NilValue used 10178 times in 491 packages
// NULL_USER_OBJECT used 8268 times in rggobi, XML, rjson, bigmemory, dbarts, lazy, RGtk2
extern SEXP R_UnboundValue; // R_UnboundValue used 73 times in 23 packages
extern SEXP R_MissingArg; // R_MissingArg used 21 times in 12 packages
extern SEXP R_RestartToken; // R_RestartToken unused
extern SEXP R_baseSymbol; // R_baseSymbol unused
extern SEXP R_BaseSymbol; // R_BaseSymbol unused
extern SEXP R_BraceSymbol; // R_BraceSymbol unused
extern SEXP R_Bracket2Symbol; // R_Bracket2Symbol used 4 times in purrr
extern SEXP R_BracketSymbol; // R_BracketSymbol unused
extern SEXP R_ClassSymbol; // R_ClassSymbol used 311 times in 84 packages
extern SEXP R_DeviceSymbol; // R_DeviceSymbol unused
extern SEXP R_DimNamesSymbol; // R_DimNamesSymbol used 230 times in 51 packages
extern SEXP R_DimSymbol; // R_DimSymbol used 1015 times in 170 packages
extern SEXP R_DollarSymbol; // R_DollarSymbol used 6 times in dplyr, Rcpp, Rcpp11
extern SEXP R_DotsSymbol; // R_DotsSymbol used 13 times in RPostgreSQL, RcppDE, lbfgs, purrr, RMySQL, DEoptim, qtbase
extern SEXP R_DoubleColonSymbol; // R_DoubleColonSymbol unused
extern SEXP R_DropSymbol; // R_DropSymbol unused
extern SEXP R_LastvalueSymbol; // R_LastvalueSymbol unused
extern SEXP R_LevelsSymbol; // R_LevelsSymbol used 51 times in 17 packages
extern SEXP R_ModeSymbol; // R_ModeSymbol unused
extern SEXP R_NaRmSymbol; // R_NaRmSymbol used 2 times in dplyr
extern SEXP R_NameSymbol; // R_NameSymbol used 2 times in qtbase
extern SEXP R_NamesSymbol; // R_NamesSymbol used 1373 times in 249 packages
extern SEXP R_NamespaceEnvSymbol; // R_NamespaceEnvSymbol unused
extern SEXP R_PackageSymbol; // R_PackageSymbol used 2 times in Rmosek, HiPLARM
extern SEXP R_PreviousSymbol; // R_PreviousSymbol unused
extern SEXP R_QuoteSymbol; // R_QuoteSymbol unused
extern SEXP R_RowNamesSymbol; // R_RowNamesSymbol used 97 times in 37 packages
extern SEXP R_SeedsSymbol; // R_SeedsSymbol used 2 times in treatSens
extern SEXP R_SortListSymbol; // R_SortListSymbol unused
extern SEXP R_SourceSymbol; // R_SourceSymbol unused
extern SEXP R_SpecSymbol; // R_SpecSymbol unused
extern SEXP R_TripleColonSymbol; // R_TripleColonSymbol unused
extern SEXP R_TspSymbol; // R_TspSymbol unused
extern SEXP R_dot_defined; // R_dot_defined unused
extern SEXP R_dot_Method; // R_dot_Method unused
extern SEXP R_dot_packageName; // R_dot_packageName unused
extern SEXP R_dot_target; // R_dot_target unused
extern SEXP R_NaString; // R_NaString used 36 times in stringdist, RCurl, RSclient, uniqueAtomMat, XML, Rserve, Rblpapi, SoundexBR, rJava, iotools
// NA_STRING used 574 times in 90 packages
extern SEXP R_BlankString; // R_BlankString used 39 times in 13 packages
extern SEXP R_BlankScalarString; // R_BlankScalarString unused
SEXP R_GetCurrentSrcref(int); // R_GetCurrentSrcref unused
SEXP R_GetSrcFilename(SEXP); // R_GetSrcFilename unused
SEXP Rf_asChar(SEXP); // Rf_asChar used 246 times in 16 packages
// asChar used 194 times in 36 packages
SEXP Rf_coerceVector(SEXP, SEXPTYPE); // Rf_coerceVector used 44 times in 13 packages
// coerceVector used 2585 times in 167 packages
SEXP Rf_PairToVectorList(SEXP x); // Rf_PairToVectorList unused
// PairToVectorList used 7 times in cba, rcdd
SEXP Rf_VectorToPairList(SEXP x); // Rf_VectorToPairList unused
// VectorToPairList used 13 times in pomp, arules
SEXP Rf_asCharacterFactor(SEXP x); // Rf_asCharacterFactor used 3 times in tidyr, reshape2, RSQLite
// asCharacterFactor used 11 times in fastmatch, Kmisc, data.table
int Rf_asLogical(SEXP x); // Rf_asLogical used 45 times in 11 packages
// asLogical used 462 times in 64 packages
int Rf_asInteger(SEXP x); // Rf_asInteger used 746 times in 23 packages
// asInteger used 1277 times in 140 packages
double Rf_asReal(SEXP x); // Rf_asReal used 113 times in 17 packages
// asReal used 383 times in 83 packages
Rcomplex Rf_asComplex(SEXP x); // Rf_asComplex unused
// asComplex used 1 times in ff
typedef struct R_allocator R_allocator_t;
char * Rf_acopy_string(const char *); // Rf_acopy_string unused
// acopy_string used 10 times in splusTimeDate
void Rf_addMissingVarsToNewEnv(SEXP, SEXP); // Rf_addMissingVarsToNewEnv unused
// addMissingVarsToNewEnv unused
SEXP Rf_alloc3DArray(SEXPTYPE, int, int, int); // Rf_alloc3DArray unused
// alloc3DArray used 21 times in mcmc, msm, TPmsm, unfoldr, RandomFields, cplm
SEXP Rf_allocArray(SEXPTYPE, SEXP); // Rf_allocArray used 4 times in h5
// allocArray used 24 times in unfoldr, kergp, pomp, proxy, kza, slam, mvMORPH, TPmsm, ouch, RandomFields
SEXP Rf_allocFormalsList2(SEXP sym1, SEXP sym2); // Rf_allocFormalsList2 unused
// allocFormalsList2 unused
SEXP Rf_allocFormalsList3(SEXP sym1, SEXP sym2, SEXP sym3); // Rf_allocFormalsList3 unused
// allocFormalsList3 unused
SEXP Rf_allocFormalsList4(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4); // Rf_allocFormalsList4 unused
// allocFormalsList4 unused
SEXP Rf_allocFormalsList5(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5); // Rf_allocFormalsList5 unused
// allocFormalsList5 unused
SEXP Rf_allocFormalsList6(SEXP sym1, SEXP sym2, SEXP sym3, SEXP sym4, SEXP sym5, SEXP sym6); // Rf_allocFormalsList6 unused
// allocFormalsList6 unused
SEXP Rf_allocMatrix(SEXPTYPE, int, int); // Rf_allocMatrix used 122 times in 14 packages
// allocMatrix used 1577 times in 244 packages
SEXP Rf_allocList(int); // Rf_allocList unused
// allocList used 60 times in 25 packages
SEXP Rf_allocS4Object(void); // Rf_allocS4Object used 2 times in Rserve, RSclient
// allocS4Object used 1 times in arules
SEXP Rf_allocSExp(SEXPTYPE); // Rf_allocSExp unused
// allocSExp used 14 times in igraph, rgp, data.table, RandomFields, mmap, qtbase
SEXP Rf_allocVector3(SEXPTYPE, R_xlen_t, R_allocator_t*); // Rf_allocVector3 unused
// allocVector3 unused
R_xlen_t Rf_any_duplicated(SEXP x, Rboolean from_last); // Rf_any_duplicated unused
// any_duplicated used 5 times in data.table, checkmate
R_xlen_t Rf_any_duplicated3(SEXP x, SEXP incomp, Rboolean from_last); // Rf_any_duplicated3 unused
// any_duplicated3 unused
SEXP Rf_applyClosure(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_applyClosure unused
// applyClosure unused
SEXP Rf_arraySubscript(int, SEXP, SEXP, SEXP (*)(SEXP,SEXP),
SEXP (*)(SEXP, int), SEXP);
SEXP Rf_classgets(SEXP, SEXP); // Rf_classgets used 27 times in fts, clpAPI, cplexAPI, sybilSBML, Rblpapi, glpkAPI
// classgets used 91 times in 30 packages
SEXP Rf_cons(SEXP, SEXP); // Rf_cons used 39 times in dplyr, Rcpp, Rcpp11
// cons used 609 times in 39 packages
void Rf_copyMatrix(SEXP, SEXP, Rboolean); // Rf_copyMatrix used 8 times in CNVassoc
// copyMatrix used 7 times in BDgraph, Matrix, kza
void Rf_copyListMatrix(SEXP, SEXP, Rboolean); // Rf_copyListMatrix unused
// copyListMatrix used 1 times in Matrix
void Rf_copyMostAttrib(SEXP, SEXP); // Rf_copyMostAttrib used 8 times in tidyr, purrr, Rcpp, reshape2
// copyMostAttrib used 68 times in arules, robustbase, data.table, xts, memisc, proxy, zoo, tau
void Rf_copyVector(SEXP, SEXP); // Rf_copyVector unused
// copyVector used 12 times in tm, kza, mlegp, adaptivetau
int Rf_countContexts(int, int); // Rf_countContexts unused
// countContexts unused
SEXP Rf_CreateTag(SEXP); // Rf_CreateTag unused
// CreateTag used 1 times in rgp
void Rf_defineVar(SEXP, SEXP, SEXP); // Rf_defineVar used 7 times in purrr, Rcpp, Rserve, Rcpp11
// defineVar used 218 times in 38 packages
SEXP Rf_dimgets(SEXP, SEXP); // Rf_dimgets unused
// dimgets used 3 times in CorrBin
SEXP Rf_dimnamesgets(SEXP, SEXP); // Rf_dimnamesgets unused
// dimnamesgets used 24 times in Matrix, RxCEcolInf, lxb, sapa
SEXP Rf_DropDims(SEXP); // Rf_DropDims unused
// DropDims unused
SEXP Rf_duplicate(SEXP); // Rf_duplicate used 21 times in XML, data.table, Rcpp11, lme4, dplyr, Rcpp, RcppClassic, grr, NMF, copula
// duplicate used 2088 times in 224 packages
SEXP Rf_shallow_duplicate(SEXP); // Rf_shallow_duplicate unused
// shallow_duplicate used 2 times in tmlenet, smint
SEXP Rf_lazy_duplicate(SEXP); // Rf_lazy_duplicate unused
// lazy_duplicate unused
SEXP Rf_duplicated(SEXP, Rboolean); // Rf_duplicated unused
// duplicated used 402 times in 100 packages
Rboolean R_envHasNoSpecialSymbols(SEXP); // R_envHasNoSpecialSymbols unused
SEXP Rf_eval(SEXP, SEXP); // Rf_eval used 105 times in 24 packages
// eval used 25178 times in 269 packages
SEXP Rf_findFun(SEXP, SEXP); // Rf_findFun used 7 times in Rcpp, Rcpp11, littler, RGtk2
// findFun used 13 times in sprint, tikzDevice, yaml, unfoldr, TraMineR, RGtk2
SEXP Rf_findVar(SEXP, SEXP); // Rf_findVar used 19 times in R2SWF, Rcpp11, dplyr, Rcpp, pryr, rJava, littler, showtext
// findVar used 1333 times in 24 packages
SEXP Rf_findVarInFrame(SEXP, SEXP); // Rf_findVarInFrame used 7 times in RCurl, Rcpp, Rcpp11
// findVarInFrame used 101 times in 13 packages
SEXP Rf_findVarInFrame3(SEXP, SEXP, Rboolean); // Rf_findVarInFrame3 used 1 times in pryr
// findVarInFrame3 used 5 times in datamap
SEXP Rf_getAttrib(SEXP, SEXP); // Rf_getAttrib used 256 times in 36 packages
// getAttrib used 1930 times in 239 packages
SEXP Rf_GetArrayDimnames(SEXP); // Rf_GetArrayDimnames unused
// GetArrayDimnames unused
SEXP Rf_GetColNames(SEXP); // Rf_GetColNames unused
// GetColNames unused
void Rf_GetMatrixDimnames(SEXP, SEXP*, SEXP*, const char**, const char**); // Rf_GetMatrixDimnames unused
// GetMatrixDimnames used 2 times in Kmisc, optmatch
SEXP Rf_GetOption(SEXP, SEXP); // Rf_GetOption unused
// GetOption used 5 times in rgl, gmp, Cairo, RGtk2
SEXP Rf_GetOption1(SEXP); // Rf_GetOption1 used 5 times in RProtoBuf, gmp
// GetOption1 used 1 times in PCICt
int Rf_GetOptionDigits(void); // Rf_GetOptionDigits unused
// GetOptionDigits unused
int Rf_GetOptionWidth(void); // Rf_GetOptionWidth used 1 times in progress
// GetOptionWidth unused
SEXP Rf_GetRowNames(SEXP); // Rf_GetRowNames unused
// GetRowNames unused
void Rf_gsetVar(SEXP, SEXP, SEXP); // Rf_gsetVar unused
// gsetVar used 4 times in RSVGTipsDevice, Cairo, RSvgDevice, JavaGD
SEXP Rf_install(const char *); // Rf_install used 990 times in 50 packages
// install used 3178 times in 224 packages
SEXP Rf_installChar(SEXP); // Rf_installChar used 15 times in dplyr, Rcpp
// installChar used 4 times in dplyr
SEXP Rf_installDDVAL(int i); // Rf_installDDVAL unused
// installDDVAL unused
SEXP Rf_installS3Signature(const char *, const char *); // Rf_installS3Signature unused
// installS3Signature unused
Rboolean Rf_isFree(SEXP); // Rf_isFree unused
// isFree unused
Rboolean Rf_isOrdered(SEXP); // Rf_isOrdered unused
// isOrdered used 65 times in partykit, PythonInR, data.table, RSQLite
Rboolean Rf_isUnordered(SEXP); // Rf_isUnordered used 1 times in OpenMx
// isUnordered used 2 times in PythonInR
Rboolean Rf_isUnsorted(SEXP, Rboolean); // Rf_isUnsorted unused
// isUnsorted unused
SEXP Rf_lengthgets(SEXP, R_len_t); // Rf_lengthgets used 7 times in readxl, readr
// lengthgets used 47 times in 11 packages
SEXP Rf_xlengthgets(SEXP, R_xlen_t); // Rf_xlengthgets unused
// xlengthgets unused
SEXP R_lsInternal(SEXP, Rboolean); // R_lsInternal used 5 times in Rcpp, rJava, Rcpp11, qtbase
SEXP R_lsInternal3(SEXP, Rboolean, Rboolean); // R_lsInternal3 unused
SEXP Rf_match(SEXP, SEXP, int); // Rf_match used 2 times in Rvcg
// match used 8773 times in 388 packages
SEXP Rf_matchE(SEXP, SEXP, int, SEXP); // Rf_matchE unused
// matchE unused
SEXP Rf_namesgets(SEXP, SEXP); // Rf_namesgets used 4 times in OpenMx, rpf
// namesgets used 80 times in 14 packages
SEXP Rf_mkChar(const char *); // Rf_mkChar used 517 times in 32 packages
// mkChar used 4545 times in 287 packages
SEXP Rf_mkCharLen(const char *, int); // Rf_mkCharLen used 21 times in refGenome, redland, Rcpp11, stringi, Kmisc, Rcpp, sourcetools, iotools
// mkCharLen used 38 times in 16 packages
Rboolean Rf_NonNullStringMatch(SEXP, SEXP); // Rf_NonNullStringMatch unused
// NonNullStringMatch used 8 times in proxy, arules, arulesSequences, cba
int Rf_ncols(SEXP); // Rf_ncols used 22 times in fdaPDE, fts, BoomSpikeSlab, Rmosek, ccgarch, rcppbugs, biganalytics, CEC, OpenMx, RTriangle
// ncols used 3805 times in 182 packages
int Rf_nrows(SEXP); // Rf_nrows used 32 times in 12 packages
// nrows used 4332 times in 215 packages
SEXP Rf_nthcdr(SEXP, int); // Rf_nthcdr unused
// nthcdr used 9 times in sprint, rmongodb, PythonInR, xts
typedef enum {Bytes, Chars, Width} nchar_type;
int R_nchar(SEXP string, nchar_type type_, // R_nchar unused
Rboolean allowNA, Rboolean keepNA, const char* msg_name);
Rboolean Rf_pmatch(SEXP, SEXP, Rboolean); // Rf_pmatch unused
// pmatch used 169 times in ore, git2r, AdaptFitOS, data.table, seqminer, locfit, oce, rmumps
Rboolean Rf_psmatch(const char *, const char *, Rboolean); // Rf_psmatch unused
// psmatch used 5 times in rgl
void Rf_PrintValue(SEXP); // Rf_PrintValue used 95 times in 19 packages
// PrintValue used 119 times in 13 packages
void Rf_readS3VarsFromFrame(SEXP, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*, SEXP*); // Rf_readS3VarsFromFrame unused
// readS3VarsFromFrame unused
SEXP Rf_setAttrib(SEXP, SEXP, SEXP); // Rf_setAttrib used 325 times in 35 packages
// setAttrib used 1830 times in 251 packages
void Rf_setSVector(SEXP*, int, SEXP); // Rf_setSVector unused
// setSVector unused
void Rf_setVar(SEXP, SEXP, SEXP); // Rf_setVar used 1 times in showtext
// setVar used 24 times in Rhpc, rscproxy, PythonInR, rgenoud, survival, gsl, littler, spatstat
SEXP Rf_stringSuffix(SEXP, int); // Rf_stringSuffix unused
// stringSuffix unused
SEXPTYPE Rf_str2type(const char *); // Rf_str2type used 4 times in purrr
// str2type used 1 times in RGtk2
Rboolean Rf_StringBlank(SEXP); // Rf_StringBlank used 1 times in LCMCR
// StringBlank unused
SEXP Rf_substitute(SEXP,SEXP); // Rf_substitute unused
// substitute used 255 times in 56 packages
const char * Rf_translateChar(SEXP); // Rf_translateChar used 1 times in devEMF
// translateChar used 59 times in 19 packages
const char * Rf_translateChar0(SEXP); // Rf_translateChar0 unused
// translateChar0 unused
const char * Rf_translateCharUTF8(SEXP); // Rf_translateCharUTF8 used 22 times in Rserve, xml2, readr, gdtools, Rcpp11, dplyr, Rcpp, haven
// translateCharUTF8 used 66 times in 13 packages
const char * Rf_type2char(SEXPTYPE); // Rf_type2char used 33 times in 13 packages
// type2char used 107 times in 12 packages
SEXP Rf_type2rstr(SEXPTYPE); // Rf_type2rstr unused
// type2rstr unused
SEXP Rf_type2str(SEXPTYPE); // Rf_type2str used 4 times in Rcpp, pryr
// type2str used 3 times in Kmisc, yaml
SEXP Rf_type2str_nowarn(SEXPTYPE); // Rf_type2str_nowarn unused
// type2str_nowarn used 1 times in qrmtools
void Rf_unprotect_ptr(SEXP); // Rf_unprotect_ptr unused
// unprotect_ptr unused
void __attribute__((noreturn)) R_signal_protect_error(void);
void __attribute__((noreturn)) R_signal_unprotect_error(void);
void __attribute__((noreturn)) R_signal_reprotect_error(PROTECT_INDEX i);
SEXP R_tryEval(SEXP, SEXP, int *); // R_tryEval used 1118 times in 24 packages
SEXP R_tryEvalSilent(SEXP, SEXP, int *); // R_tryEvalSilent unused
const char *R_curErrorBuf(); // R_curErrorBuf used 4 times in Rhpc, Rcpp11
Rboolean Rf_isS4(SEXP); // Rf_isS4 used 16 times in Rcpp, Rcpp11
// isS4 used 13 times in PythonInR, Rcpp11, dplyr, Rcpp, catnet, rmumps, sdnet
SEXP Rf_asS4(SEXP, Rboolean, int); // Rf_asS4 unused
// asS4 unused
SEXP Rf_S3Class(SEXP); // Rf_S3Class unused
// S3Class used 4 times in RInside, littler
int Rf_isBasicClass(const char *); // Rf_isBasicClass unused
// isBasicClass unused
Rboolean R_cycle_detected(SEXP s, SEXP child); // R_cycle_detected unused
typedef enum {
CE_NATIVE = 0,
CE_UTF8 = 1,
CE_LATIN1 = 2,
CE_BYTES = 3,
CE_SYMBOL = 5,
    CE_ANY = 99
} cetype_t; // cetype_t used 47 times in 13 packages
cetype_t Rf_getCharCE(SEXP); // Rf_getCharCE used 13 times in RSclient, Rserve, genie, dplyr, Rcpp, rJava, ROracle
// getCharCE used 16 times in ore, RSclient, PythonInR, Rserve, jsonlite, tau, rJava
SEXP Rf_mkCharCE(const char *, cetype_t); // Rf_mkCharCE used 40 times in readxl, mongolite, xml2, readr, Rcpp11, stringi, commonmark, dplyr, Rcpp, haven
// mkCharCE used 72 times in 15 packages
SEXP Rf_mkCharLenCE(const char *, int, cetype_t); // Rf_mkCharLenCE used 68 times in readr, ROracle, stringi
// mkCharLenCE used 23 times in 11 packages
const char *Rf_reEnc(const char *x, cetype_t ce_in, cetype_t ce_out, int subst); // Rf_reEnc used 5 times in RCurl, RSclient, Rserve, rJava
// reEnc used 3 times in PythonInR, RJSONIO
SEXP R_forceAndCall(SEXP e, int n, SEXP rho); // R_forceAndCall unused
SEXP R_MakeExternalPtr(void *p, SEXP tag, SEXP prot); // R_MakeExternalPtr used 321 times in 102 packages
void *R_ExternalPtrAddr(SEXP s); // R_ExternalPtrAddr used 2127 times in 115 packages
SEXP R_ExternalPtrTag(SEXP s); // R_ExternalPtrTag used 195 times in 32 packages
SEXP R_ExternalPtrProtected(SEXP s); // R_ExternalPtrProtected used 6 times in PopGenome, Rcpp, WhopGenome, data.table, Rcpp11
void R_ClearExternalPtr(SEXP s); // R_ClearExternalPtr used 157 times in 64 packages
void R_SetExternalPtrAddr(SEXP s, void *p); // R_SetExternalPtrAddr used 23 times in ff, PopGenome, RCurl, rstream, Rlabkey, WhopGenome, XML, RJSONIO, memisc, ROracle
void R_SetExternalPtrTag(SEXP s, SEXP tag); // R_SetExternalPtrTag used 16 times in PopGenome, rstream, Rlabkey, WhopGenome, Rcpp11, Rcpp, rLindo
void R_SetExternalPtrProtected(SEXP s, SEXP p); // R_SetExternalPtrProtected used 9 times in PopGenome, rstream, Rlabkey, Rcpp, WhopGenome, Rcpp11
typedef void (*R_CFinalizer_t)(SEXP);
void R_RegisterFinalizer(SEXP s, SEXP fun); // R_RegisterFinalizer used 1 times in XML
void R_RegisterCFinalizer(SEXP s, R_CFinalizer_t fun); // R_RegisterCFinalizer used 73 times in 27 packages
void R_RegisterFinalizerEx(SEXP s, SEXP fun, Rboolean onexit); // R_RegisterFinalizerEx unused
void R_RegisterCFinalizerEx(SEXP s, R_CFinalizer_t fun, Rboolean onexit); // R_RegisterCFinalizerEx used 152 times in 58 packages
void R_RunPendingFinalizers(void); // R_RunPendingFinalizers unused
SEXP R_MakeWeakRef(SEXP key, SEXP val, SEXP fin, Rboolean onexit); // R_MakeWeakRef used 4 times in igraph, svd
SEXP R_MakeWeakRefC(SEXP key, SEXP val, R_CFinalizer_t fin, Rboolean onexit); // R_MakeWeakRefC unused
SEXP R_WeakRefKey(SEXP w); // R_WeakRefKey used 3 times in igraph, Rcpp, Rcpp11
SEXP R_WeakRefValue(SEXP w); // R_WeakRefValue used 7 times in igraph, Rcpp, svd, Rcpp11
void R_RunWeakRefFinalizer(SEXP w); // R_RunWeakRefFinalizer used 1 times in igraph
SEXP R_PromiseExpr(SEXP); // R_PromiseExpr unused
SEXP R_ClosureExpr(SEXP); // R_ClosureExpr unused
void R_initialize_bcode(void); // R_initialize_bcode unused
SEXP R_bcEncode(SEXP); // R_bcEncode unused
SEXP R_bcDecode(SEXP); // R_bcDecode unused
Rboolean R_ToplevelExec(void (*fun)(void *), void *data);
SEXP R_ExecWithCleanup(SEXP (*fun)(void *), void *data,
void (*cleanfun)(void *), void *cleandata);
void R_RestoreHashCount(SEXP rho); // R_RestoreHashCount unused
Rboolean R_IsPackageEnv(SEXP rho); // R_IsPackageEnv unused
SEXP R_PackageEnvName(SEXP rho); // R_PackageEnvName unused
SEXP R_FindPackageEnv(SEXP info); // R_FindPackageEnv unused
Rboolean R_IsNamespaceEnv(SEXP rho); // R_IsNamespaceEnv unused
SEXP R_NamespaceEnvSpec(SEXP rho); // R_NamespaceEnvSpec unused
SEXP R_FindNamespace(SEXP info); // R_FindNamespace used 14 times in 11 packages
void R_LockEnvironment(SEXP env, Rboolean bindings); // R_LockEnvironment used 2 times in Rcpp, Rcpp11
Rboolean R_EnvironmentIsLocked(SEXP env); // R_EnvironmentIsLocked used 2 times in Rcpp, Rcpp11
void R_LockBinding(SEXP sym, SEXP env); // R_LockBinding used 3 times in data.table, Rcpp, Rcpp11
void R_unLockBinding(SEXP sym, SEXP env); // R_unLockBinding used 2 times in Rcpp, Rcpp11
void R_MakeActiveBinding(SEXP sym, SEXP fun, SEXP env); // R_MakeActiveBinding unused
Rboolean R_BindingIsLocked(SEXP sym, SEXP env); // R_BindingIsLocked used 2 times in Rcpp, Rcpp11
Rboolean R_BindingIsActive(SEXP sym, SEXP env); // R_BindingIsActive used 2 times in Rcpp, Rcpp11
Rboolean R_HasFancyBindings(SEXP rho); // R_HasFancyBindings unused
void Rf_errorcall(SEXP, const char *, ...) __attribute__((noreturn)); // Rf_errorcall used 27 times in purrr, mongolite, jsonlite, pbdMPI, rJava, openssl
// errorcall used 103 times in RCurl, arules, XML, arulesSequences, pbdMPI, xts, proxy, cba, rJava, RSAP
void Rf_warningcall(SEXP, const char *, ...); // Rf_warningcall used 5 times in pbdMPI, mongolite
// warningcall used 4 times in RInside, jsonlite, pbdMPI
void Rf_warningcall_immediate(SEXP, const char *, ...); // Rf_warningcall_immediate used 2 times in mongolite, V8
// warningcall_immediate used 2 times in Runuran
void R_XDREncodeDouble(double d, void *buf); // R_XDREncodeDouble unused
double R_XDRDecodeDouble(void *buf); // R_XDRDecodeDouble unused
void R_XDREncodeInteger(int i, void *buf); // R_XDREncodeInteger unused
int R_XDRDecodeInteger(void *buf); // R_XDRDecodeInteger unused
typedef void *R_pstream_data_t;
typedef enum {
R_pstream_any_format,
R_pstream_ascii_format,
R_pstream_binary_format,
R_pstream_xdr_format,
R_pstream_asciihex_format
} R_pstream_format_t; // R_pstream_format_t used 7 times in RApiSerialize, Rhpc, fastdigest
typedef struct R_outpstream_st *R_outpstream_t;
struct R_outpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int version;
void (*OutChar)(R_outpstream_t, int);
void (*OutBytes)(R_outpstream_t, void *, int);
SEXP (*OutPersistHookFunc)(SEXP, SEXP);
SEXP OutPersistHookData; // OutPersistHookData unused
};
typedef struct R_inpstream_st *R_inpstream_t;
struct R_inpstream_st {
R_pstream_data_t data;
R_pstream_format_t type;
int (*InChar)(R_inpstream_t);
void (*InBytes)(R_inpstream_t, void *, int);
SEXP (*InPersistHookFunc)(SEXP, SEXP);
SEXP InPersistHookData; // InPersistHookData unused
};
void R_InitInPStream(R_inpstream_t stream, R_pstream_data_t data, // R_InitInPStream used 2 times in RApiSerialize, Rhpc
R_pstream_format_t type,
int (*inchar)(R_inpstream_t),
void (*inbytes)(R_inpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitOutPStream(R_outpstream_t stream, R_pstream_data_t data, // R_InitOutPStream used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
R_pstream_format_t type, int version,
void (*outchar)(R_outpstream_t, int),
void (*outbytes)(R_outpstream_t, void *, int),
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileInPStream(R_inpstream_t stream, FILE *fp, // R_InitFileInPStream used 1 times in filehash
R_pstream_format_t type,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_InitFileOutPStream(R_outpstream_t stream, FILE *fp, // R_InitFileOutPStream unused
R_pstream_format_t type, int version,
SEXP (*phook)(SEXP, SEXP), SEXP pdata);
void R_Serialize(SEXP s, R_outpstream_t ops); // R_Serialize used 4 times in RApiSerialize, Rhpc, fastdigest, qtbase
SEXP R_Unserialize(R_inpstream_t ips); // R_Unserialize used 4 times in RApiSerialize, Rhpc, filehash
SEXP R_do_slot(SEXP obj, SEXP name); // R_do_slot used 162 times in 11 packages
SEXP R_do_slot_assign(SEXP obj, SEXP name, SEXP value); // R_do_slot_assign used 17 times in excel.link, redland, Rcpp11, Matrix, TMB, Rcpp, FREGAT, HiPLARM, OpenMx, rJPSGCS
int R_has_slot(SEXP obj, SEXP name); // R_has_slot used 14 times in Matrix, Rcpp, HiPLARM, OpenMx, Rcpp11
SEXP R_do_MAKE_CLASS(const char *what); // R_do_MAKE_CLASS used 6 times in TMB, Rcpp, Rcpp11
SEXP R_getClassDef (const char *what); // R_getClassDef used 5 times in memisc, Rcpp, Rcpp11
SEXP R_getClassDef_R(SEXP what); // R_getClassDef_R unused
Rboolean R_has_methods_attached(void); // R_has_methods_attached unused
Rboolean R_isVirtualClass(SEXP class_def, SEXP env); // R_isVirtualClass unused
Rboolean R_extends (SEXP class1, SEXP class2, SEXP env); // R_extends unused
SEXP R_do_new_object(SEXP class_def); // R_do_new_object used 9 times in TMB, memisc, Rcpp, Rcpp11
int R_check_class_and_super(SEXP x, const char **valid, SEXP rho); // R_check_class_and_super used 5 times in Matrix, Rmosek, HiPLARM
int R_check_class_etc (SEXP x, const char **valid); // R_check_class_etc used 41 times in Matrix, HiPLARM
void R_PreserveObject(SEXP); // R_PreserveObject used 112 times in 29 packages
void R_ReleaseObject(SEXP); // R_ReleaseObject used 114 times in 27 packages
void R_dot_Last(void); // R_dot_Last used 4 times in RInside, rJava, littler
void R_RunExitFinalizers(void); // R_RunExitFinalizers used 4 times in RInside, TMB, rJava, littler
int R_system(const char *); // R_system used 1 times in rJava
Rboolean R_compute_identical(SEXP, SEXP, int); // R_compute_identical used 14 times in igraph, Matrix, rgp, data.table
void R_orderVector(int *indx, int n, SEXP arglist, Rboolean nalast, Rboolean decreasing); // R_orderVector used 5 times in glpkAPI, nontarget, CEGO
SEXP Rf_allocVector(SEXPTYPE, R_xlen_t); // Rf_allocVector used 1086 times in 59 packages
// allocVector used 12419 times in 551 packages
Rboolean Rf_conformable(SEXP, SEXP); // Rf_conformable unused
// conformable used 141 times in 22 packages
SEXP Rf_elt(SEXP, int); // Rf_elt unused
// elt used 2310 times in 37 packages
Rboolean Rf_inherits(SEXP, const char *); // Rf_inherits used 530 times in 21 packages
// inherits used 814 times in 80 packages
Rboolean Rf_isArray(SEXP); // Rf_isArray unused
// isArray used 34 times in checkmate, PythonInR, data.table, ifultools, Rblpapi, Rvcg, unfoldr, TMB, kza, qtbase
Rboolean Rf_isFactor(SEXP); // Rf_isFactor used 22 times in 11 packages
// isFactor used 42 times in checkmate, rggobi, PythonInR, data.table, Kmisc, partykit, cba, qtbase, RSQLite
Rboolean Rf_isFrame(SEXP); // Rf_isFrame used 1 times in OpenMx
// isFrame used 15 times in checkmate, splusTimeDate, OjaNP, PythonInR, data.table, robfilter
Rboolean Rf_isFunction(SEXP); // Rf_isFunction used 4 times in Rserve, genie, RcppClassic
// isFunction used 274 times in 43 packages
Rboolean Rf_isInteger(SEXP); // Rf_isInteger used 39 times in 14 packages
// isInteger used 402 times in 77 packages
Rboolean Rf_isLanguage(SEXP); // Rf_isLanguage unused
// isLanguage used 63 times in PythonInR, rgp, RandomFields
Rboolean Rf_isList(SEXP); // Rf_isList unused
// isList used 40 times in 11 packages
Rboolean Rf_isMatrix(SEXP); // Rf_isMatrix used 55 times in 16 packages
// isMatrix used 293 times in 65 packages
Rboolean Rf_isNewList(SEXP); // Rf_isNewList used 6 times in Rmosek, RcppClassic
// isNewList used 103 times in 27 packages
Rboolean Rf_isNumber(SEXP); // Rf_isNumber unused
// isNumber used 14 times in PythonInR, readr, stringi, qtbase
Rboolean Rf_isNumeric(SEXP); // Rf_isNumeric used 31 times in Rmosek, gaselect, RcppCNPy, genie, mets, Morpho, rstan, Rcpp, RcppClassic, OpenMx
// isNumeric used 468 times in 49 packages
Rboolean Rf_isPairList(SEXP); // Rf_isPairList unused
// isPairList used 2 times in PythonInR
Rboolean Rf_isPrimitive(SEXP); // Rf_isPrimitive unused
// isPrimitive used 7 times in PythonInR, qtbase
Rboolean Rf_isTs(SEXP); // Rf_isTs unused
// isTs used 2 times in PythonInR
Rboolean Rf_isUserBinop(SEXP); // Rf_isUserBinop unused
// isUserBinop used 2 times in PythonInR
Rboolean Rf_isValidString(SEXP); // Rf_isValidString unused
// isValidString used 26 times in SSN, PythonInR, foreign, pbdMPI, RJSONIO, SASxport
Rboolean Rf_isValidStringF(SEXP); // Rf_isValidStringF unused
// isValidStringF used 2 times in PythonInR
Rboolean Rf_isVector(SEXP); // Rf_isVector used 15 times in RProtoBuf, RcppCNPy, stringi, purrr, RcppClassic, OpenMx, adaptivetau
// isVector used 182 times in 46 packages
Rboolean Rf_isVectorAtomic(SEXP); // Rf_isVectorAtomic used 13 times in agop, tidyr, reshape2, stringi
// isVectorAtomic used 40 times in bit, matrixStats, checkmate, PythonInR, data.table, Matrix, bit64, potts, aster2, qtbase
Rboolean Rf_isVectorList(SEXP); // Rf_isVectorList used 23 times in genie, purrr, RNiftyReg, stringi
// isVectorList used 12 times in RPostgreSQL, spsurvey, PythonInR, stringi, adaptivetau, PCICt, RandomFields
Rboolean Rf_isVectorizable(SEXP); // Rf_isVectorizable unused
// isVectorizable used 3 times in PythonInR, robfilter
SEXP Rf_lang1(SEXP); // Rf_lang1 used 14 times in PopGenome, WhopGenome, nontarget, Rcpp11, purrr, Rcpp, CEGO
// lang1 used 30 times in 11 packages
SEXP Rf_lang2(SEXP, SEXP); // Rf_lang2 used 64 times in 13 packages
// lang2 used 216 times in 75 packages
SEXP Rf_lang3(SEXP, SEXP, SEXP); // Rf_lang3 used 19 times in purrr, RcppDE, Rcpp, lbfgs, emdist, Rcpp11
// lang3 used 107 times in 28 packages
SEXP Rf_lang4(SEXP, SEXP, SEXP, SEXP); // Rf_lang4 used 8 times in lme4, purrr, Rcpp, diversitree, Rcpp11
// lang4 used 65 times in 21 packages
SEXP Rf_lang5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang5 unused
// lang5 used 11 times in PBSddesolve, GNE, SMC
SEXP Rf_lang6(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_lang6 used 1 times in lme4
// lang6 used 2 times in GNE
SEXP Rf_lastElt(SEXP); // Rf_lastElt unused
// lastElt unused
SEXP Rf_lcons(SEXP, SEXP); // Rf_lcons used 26 times in purrr, rcppbugs, Rcpp, pryr
// lcons used 16 times in rmgarch
R_len_t Rf_length(SEXP); // Rf_length used 662 times in 69 packages
SEXP Rf_list1(SEXP); // Rf_list1 used 1 times in Rcpp
// list1 used 197 times in 11 packages
SEXP Rf_list2(SEXP, SEXP); // Rf_list2 unused
// list2 used 441 times in 12 packages
SEXP Rf_list3(SEXP, SEXP, SEXP); // Rf_list3 unused
// list3 used 72 times in marked, Rdsdp, BH, svd
SEXP Rf_list4(SEXP, SEXP, SEXP, SEXP); // Rf_list4 unused
// list4 used 58 times in igraph, PBSddesolve, Rserve, BH, yaml, treethresh, SMC
SEXP Rf_list5(SEXP, SEXP, SEXP, SEXP, SEXP); // Rf_list5 unused
// list5 used 63 times in Rdsdp, BH
SEXP Rf_listAppend(SEXP, SEXP); // Rf_listAppend unused
// listAppend used 1 times in ore
SEXP Rf_mkNamed(SEXPTYPE, const char **); // Rf_mkNamed used 8 times in Matrix, gmp, RSclient, HiPLARM
// mkNamed used 12 times in RCassandra, coxme, SamplerCompare, survival, JavaGD, DEoptim, qtbase
SEXP Rf_mkString(const char *); // Rf_mkString used 179 times in 24 packages
// mkString used 814 times in 96 packages
int Rf_nlevels(SEXP); // Rf_nlevels unused
// nlevels used 546 times in 26 packages
int Rf_stringPositionTr(SEXP, const char *); // Rf_stringPositionTr unused
// stringPositionTr unused
SEXP Rf_ScalarComplex(Rcomplex); // Rf_ScalarComplex unused
// ScalarComplex unused
SEXP Rf_ScalarInteger(int); // Rf_ScalarInteger used 390 times in 20 packages
// ScalarInteger used 704 times in 88 packages
SEXP Rf_ScalarLogical(int); // Rf_ScalarLogical used 160 times in 20 packages
// ScalarLogical used 450 times in 64 packages
SEXP Rf_ScalarRaw(Rbyte); // Rf_ScalarRaw unused
// ScalarRaw used 4 times in qtbase, RGtk2
SEXP Rf_ScalarReal(double); // Rf_ScalarReal used 146 times in 19 packages
// ScalarReal used 330 times in 65 packages
SEXP Rf_ScalarString(SEXP); // Rf_ScalarString used 33 times in agop, Nippon, Rcpp11, rpf, stringi, purrr, Rcpp
// ScalarString used 198 times in 37 packages
R_xlen_t Rf_xlength(SEXP); // Rf_xlength used 46 times in WGCNA, Rcpp, Rcpp11
SEXP Rf_protect(SEXP); // Rf_protect used 332 times in 12 packages
// protect used 599 times in 101 packages
void Rf_unprotect(int); // Rf_unprotect used 289 times in 12 packages
// unprotect used 110 times in 35 packages
void R_ProtectWithIndex(SEXP, PROTECT_INDEX *); // R_ProtectWithIndex used 8 times in OpenMx
void R_Reprotect(SEXP, PROTECT_INDEX); // R_Reprotect used 2 times in OpenMx
SEXP R_FixupRHS(SEXP x, SEXP y); // R_FixupRHS unused
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/Rmath.h
extern "C" {
double R_pow(double x, double y); // R_pow used 1521 times in 72 packages
double R_pow_di(double, int); // R_pow_di used 384 times in 53 packages
double norm_rand(void); // norm_rand used 408 times in 93 packages
double unif_rand(void); // unif_rand used 2135 times in 327 packages
double exp_rand(void); // exp_rand used 110 times in 25 packages
double Rf_dnorm4(double, double, double, int); // Rf_dnorm4 used 45 times in 13 packages
// dnorm4 used 27 times in 11 packages
// dnorm used 1377 times in 151 packages
double Rf_pnorm5(double, double, double, int, int); // Rf_pnorm5 used 143 times in 19 packages
// pnorm used 1582 times in 159 packages
// pnorm5 used 77 times in 12 packages
double Rf_qnorm5(double, double, double, int, int); // Rf_qnorm5 used 40 times in 13 packages
// qnorm5 used 30 times in igraph, PwrGSD, geepack, robustvarComp, Rcpp11, tpr, Rcpp
// qnorm used 444 times in 96 packages
double Rf_rnorm(double, double); // Rf_rnorm used 85 times in 13 packages
// rnorm used 1865 times in 198 packages
void Rf_pnorm_both(double, double *, double *, int, int); // Rf_pnorm_both used 4 times in Rcpp, Rcpp11
// pnorm_both used 12 times in MCMCpack, MasterBayes, Rcpp, phcfM, gof, Rcpp11
double Rf_dunif(double, double, double, int); // Rf_dunif used 4 times in Rcpp, Rcpp11
// dunif used 120 times in 18 packages
double Rf_punif(double, double, double, int, int); // Rf_punif used 4 times in Rcpp, Rcpp11
// punif used 70 times in 11 packages
double Rf_qunif(double, double, double, int, int); // Rf_qunif used 3 times in Rcpp, Rcpp11
// qunif used 14 times in RInside, qrjoint, Rcpp, Rcpp11, littler
double Rf_runif(double, double); // Rf_runif used 112 times in 19 packages
// runif used 2810 times in 273 packages
double Rf_dgamma(double, double, double, int); // Rf_dgamma used 13 times in lme4, epinet, Rcpp, rtkpp, rtkore, Rcpp11
// dgamma used 617 times in 57 packages
double Rf_pgamma(double, double, double, int, int); // Rf_pgamma used 31 times in TMB, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// pgamma used 164 times in 40 packages
double Rf_qgamma(double, double, double, int, int); // Rf_qgamma used 12 times in TMB, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// qgamma used 58 times in 25 packages
double Rf_rgamma(double, double); // Rf_rgamma used 88 times in 14 packages
// rgamma used 786 times in 104 packages
double Rf_log1pmx(double); // Rf_log1pmx used 2 times in Rcpp, Rcpp11
// log1pmx used 20 times in DPpackage, BH, Rcpp, Rcpp11
double log1pexp(double); // log1pexp used 4 times in Rcpp, Rcpp11
double Rf_lgamma1p(double); // Rf_lgamma1p used 3 times in OpenMx, Rcpp, Rcpp11
// lgamma1p used 14 times in Rcpp, OpenMx, ergm.count, heavy, mixAK, Rcpp11
double Rf_logspace_add(double, double); // Rf_logspace_add used 2 times in Rcpp, Rcpp11
// logspace_add used 21 times in sna, BMN, Rcpp11, RxCEcolInf, SamplerCompare, STAR, Rcpp
double Rf_logspace_sub(double, double); // Rf_logspace_sub used 2 times in Rcpp, Rcpp11
// logspace_sub used 16 times in sna, Rcpp11, SamplerCompare, truncnorm, STAR, Rcpp, bfp
double logspace_sum(double *, int); // logspace_sum unused
double Rf_dbeta(double, double, double, int); // Rf_dbeta used 14 times in Rcpp, OpenMx, rtkpp, SBSA, rtkore, Rcpp11
// dbeta used 377 times in 54 packages
double Rf_pbeta(double, double, double, int, int); // Rf_pbeta used 24 times in Rcpp, bcp, OpenMx, rtkpp, rtkore, Rcpp11
// pbeta used 262 times in 39 packages
double Rf_qbeta(double, double, double, int, int); // Rf_qbeta used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qbeta used 57 times in 17 packages
double Rf_rbeta(double, double); // Rf_rbeta used 14 times in bfa, spBayesSurv, RcppSMC, Rcpp11, Rcpp, rtkpp, rtkore
// rbeta used 431 times in 59 packages
double Rf_dlnorm(double, double, double, int); // Rf_dlnorm used 13 times in Rcpp, rtkpp, RcppProgress, rtkore, Rcpp11
// dlnorm used 68 times in 22 packages
double Rf_plnorm(double, double, double, int, int); // Rf_plnorm used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// plnorm used 37 times in 14 packages
double Rf_qlnorm(double, double, double, int, int); // Rf_qlnorm used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qlnorm used 11 times in icenReg, RInside, Rcpp, Rcpp11, littler
double Rf_rlnorm(double, double); // Rf_rlnorm used 7 times in Rcpp, rtkpp, RcppSMC, rtkore, Rcpp11
// rlnorm used 64 times in 18 packages
double Rf_dchisq(double, double, int); // Rf_dchisq used 11 times in Rcpp, rtkpp, rtkore, Rcpp11
// dchisq used 57 times in 14 packages
double Rf_pchisq(double, double, int, int); // Rf_pchisq used 9 times in Rcpp, rtkpp, rtkore, lm.br, Rcpp11
// pchisq used 152 times in 33 packages
double Rf_qchisq(double, double, int, int); // Rf_qchisq used 13 times in robustHD, ccaPP, lm.br, Rcpp11, Rcpp, rtkpp, rtkore
// qchisq used 38 times in 21 packages
double Rf_rchisq(double); // Rf_rchisq used 14 times in bfa, MixedDataImpute, rmgarch, Rcpp11, lme4, Rcpp, rtkpp, rtkore
// rchisq used 244 times in 54 packages
double Rf_dnchisq(double, double, double, int); // Rf_dnchisq used 3 times in Rcpp, Rcpp11
// dnchisq used 7 times in spc, Rcpp, Rcpp11
double Rf_pnchisq(double, double, double, int, int); // Rf_pnchisq used 3 times in Rcpp, Rcpp11
// pnchisq used 13 times in spc, Rcpp, Rcpp11
double Rf_qnchisq(double, double, double, int, int); // Rf_qnchisq used 3 times in Rcpp, Rcpp11
// qnchisq used 9 times in spc, Rcpp, Rcpp11
double Rf_rnchisq(double, double); // Rf_rnchisq used 2 times in Rcpp, Rcpp11
// rnchisq used 11 times in Rcpp, Rcpp11
double Rf_df(double, double, double, int); // Rf_df used 12 times in Rcpp, subplex, rtkpp, rtkore, Rcpp11
// df unused
double Rf_pf(double, double, double, int, int); // Rf_pf used 13 times in BIFIEsurvey, Rcpp, rtkpp, rtkore, lm.br, Rcpp11
// pf unused
double Rf_qf(double, double, double, int, int); // Rf_qf used 9 times in Rcpp, rtkpp, rtkore, lm.br, Rcpp11
// qf unused
double Rf_rf(double, double); // Rf_rf used 6 times in Rcpp, rtkpp, rtkore, Rcpp11
// rf unused
double Rf_dt(double, double, int); // Rf_dt used 12 times in TMB, Rcpp, rtkpp, rtkore, Rcpp11
// dt unused
double Rf_pt(double, double, int, int); // Rf_pt used 8 times in Rcpp, rtkpp, rtkore, lm.br, Rcpp11
// pt unused
double Rf_qt(double, double, int, int); // Rf_qt used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qt unused
double Rf_rt(double); // Rf_rt used 7 times in Rcpp, rtkpp, RcppSMC, rtkore, Rcpp11
// rt unused
double Rf_dbinom_raw(double x, double n, double p, double q, int give_log); // Rf_dbinom_raw unused
// dbinom_raw used 50 times in igraph, MCMCpack, secr, AdaptFitOS, phcfM, gof, MasterBayes, locfit
double Rf_dbinom(double, double, double, int); // Rf_dbinom used 23 times in mvabund, Rcpp11, rgam, lme4, unmarked, Rcpp, rtkpp, BayesFactor, rtkore
// dbinom used 290 times in 40 packages
double Rf_pbinom(double, double, double, int, int); // Rf_pbinom used 10 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11, rgam
// pbinom used 53 times in 16 packages
double Rf_qbinom(double, double, double, int, int); // Rf_qbinom used 9 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// qbinom used 18 times in DPpackage, Runuran, BayesXsrc, mvabund, Rcpp11, RInside, Rcpp, ump, littler
double Rf_rbinom(double, double); // Rf_rbinom used 14 times in igraph, mvabund, Rcpp11, Rcpp, rtkpp, rtkore, RcppArmadillo
// rbinom used 169 times in 50 packages
void Rf_rmultinom(int, double*, int, int*); // Rf_rmultinom unused
// rmultinom used 42 times in 18 packages
double Rf_dcauchy(double, double, double, int); // Rf_dcauchy used 15 times in lme4, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// dcauchy used 25 times in DPpackage, multimark, vcrpart, kernlab, Rcpp11, RInside, Rcpp, aucm, ordinal, littler
double Rf_pcauchy(double, double, double, int, int); // Rf_pcauchy used 10 times in lme4, Rcpp, rtkpp, rtkore, Rcpp11
// pcauchy used 25 times in DPpackage, vcrpart, Rcpp11, RInside, Rcpp, ordinal, RandomFields, littler
double Rf_qcauchy(double, double, double, int, int); // Rf_qcauchy used 10 times in lme4, Rcpp, rtkpp, rtkore, Rcpp11
// qcauchy used 11 times in RInside, DPpackage, Rcpp, Rcpp11, littler
double Rf_rcauchy(double, double); // Rf_rcauchy used 7 times in Rcpp, rtkpp, RcppSMC, rtkore, Rcpp11
// rcauchy used 21 times in PoweR, RInside, Rcpp, DEoptim, Rcpp11, littler
double Rf_dexp(double, double, int); // Rf_dexp used 12 times in unmarked, Rcpp, rtkpp, rtkore, Rcpp11
// dexp used 646 times in 82 packages
double Rf_pexp(double, double, int, int); // Rf_pexp used 11 times in unmarked, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// pexp used 117 times in 26 packages
double Rf_qexp(double, double, int, int); // Rf_qexp used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qexp used 20 times in monomvn, GeoGenetix, Rcpp11, icenReg, RInside, TMB, Rcpp, Sunder, RandomFields, littler
double Rf_rexp(double); // Rf_rexp used 20 times in iBATCGH, RcppSMC, rmgarch, Rcpp11, wrswoR, Rcpp, rtkpp, BayesFactor, rtkore
// rexp used 224 times in 56 packages
double Rf_dgeom(double, double, int); // Rf_dgeom used 11 times in Rcpp, rtkpp, rtkore, Rcpp11
// dgeom used 16 times in RInside, Rcpp, ergm.count, Rcpp11, littler
double Rf_pgeom(double, double, int, int); // Rf_pgeom used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// pgeom used 10 times in RInside, Rcpp, Rcpp11, littler
double Rf_qgeom(double, double, int, int); // Rf_qgeom used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qgeom used 10 times in RInside, Rcpp, Rcpp11, littler
double Rf_rgeom(double); // Rf_rgeom used 13 times in igraph, Rcpp, iBATCGH, rtkpp, rtkore, Rcpp11
// rgeom used 25 times in BSquare, sna, ergm.count, Rcpp11, RInside, Rcpp, littler
double Rf_dhyper(double, double, double, double, int); // Rf_dhyper used 11 times in Rcpp, rtkpp, rtkore, Rcpp11
// dhyper used 14 times in AdaptFitOS, Rcpp11, RInside, Rcpp, CorrBin, locfit, littler
double Rf_phyper(double, double, double, double, int, int); // Rf_phyper used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// phyper used 17 times in Runuran, Rcpp11, cpm, RInside, Rcpp, RandomFields, vegan, littler
double Rf_qhyper(double, double, double, double, int, int); // Rf_qhyper used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qhyper used 11 times in RInside, Runuran, Rcpp, Rcpp11, littler
double Rf_rhyper(double, double, double); // Rf_rhyper used 8 times in Rcpp, rtkpp, rtkore, Rcpp11
// rhyper used 13 times in kSamples, RInside, Rcpp, Rcpp11, littler
double Rf_dnbinom(double, double, double, int); // Rf_dnbinom used 22 times in inarmix, Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// dnbinom used 170 times in 27 packages
double Rf_pnbinom(double, double, double, int, int); // Rf_pnbinom used 10 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// pnbinom used 29 times in 13 packages
double Rf_qnbinom(double, double, double, int, int); // Rf_qnbinom used 10 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// qnbinom used 12 times in RInside, Runuran, Rcpp, mvabund, Rcpp11, littler
double Rf_rnbinom(double, double); // Rf_rnbinom used 9 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// rnbinom used 41 times in 18 packages
double Rf_dnbinom_mu(double, double, double, int); // Rf_dnbinom_mu used 1 times in Rcpp
// dnbinom_mu used 18 times in RDS, KFAS, Rcpp11, unmarked, Rcpp, sspse, Bclim
double Rf_pnbinom_mu(double, double, double, int, int); // Rf_pnbinom_mu used 1 times in Rcpp
// pnbinom_mu used 3 times in Rcpp, Rcpp11
double Rf_qnbinom_mu(double, double, double, int, int); // Rf_qnbinom_mu used 1 times in Rcpp
// qnbinom_mu used 3 times in Rcpp, Rcpp11
double Rf_rnbinom_mu(double, double); // Rf_rnbinom_mu used 1 times in Rcpp
// rnbinom_mu used 7 times in Rcpp, Rcpp11
double Rf_dpois_raw (double, double, int); // Rf_dpois_raw unused
// dpois_raw used 25 times in igraph, MCMCpack, AdaptFitOS, phcfM, gof, MasterBayes, locfit
double Rf_dpois(double, double, int); // Rf_dpois used 28 times in mvabund, Rcpp11, rgam, lme4, unmarked, Rcpp, rtkpp, rtkore
// dpois used 212 times in 37 packages
double Rf_ppois(double, double, int, int); // Rf_ppois used 13 times in mvabund, Rcpp11, rgam, TMB, Rcpp, rtkpp, rtkore
// ppois used 62 times in 18 packages
double Rf_qpois(double, double, int, int); // Rf_qpois used 10 times in Rcpp, rtkpp, mvabund, rtkore, Rcpp11
// qpois used 23 times in 11 packages
double Rf_rpois(double); // Rf_rpois used 22 times in mvabund, Rcpp11, Rcpp, RcppOctave, fwsim, rtkpp, rtkore
// rpois used 157 times in 51 packages
double Rf_dweibull(double, double, double, int); // Rf_dweibull used 11 times in Rcpp, rtkpp, rtkore, Rcpp11
// dweibull used 38 times in 16 packages
double Rf_pweibull(double, double, double, int, int); // Rf_pweibull used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// pweibull used 42 times in 14 packages
double Rf_qweibull(double, double, double, int, int); // Rf_qweibull used 7 times in Rcpp, rtkpp, rtkore, Rcpp11
// qweibull used 16 times in BSquare, Rcpp11, icenReg, RInside, TMB, extWeibQuant, Rcpp, littler
double Rf_rweibull(double, double); // Rf_rweibull used 6 times in Rcpp, rtkpp, rtkore, Rcpp11
// rweibull used 35 times in 12 packages
double Rf_dlogis(double, double, double, int); // Rf_dlogis used 14 times in lme4, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// dlogis used 91 times in 18 packages
double Rf_plogis(double, double, double, int, int); // Rf_plogis used 8 times in lme4, Rcpp, rtkpp, rtkore, Rcpp11
// plogis used 125 times in 21 packages
double Rf_qlogis(double, double, double, int, int); // Rf_qlogis used 9 times in lme4, Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// qlogis used 16 times in DPpackage, geoBayes, Rcpp11, RInside, TMB, qrjoint, Rcpp, littler
double Rf_rlogis(double, double); // Rf_rlogis used 10 times in Rcpp, rtkpp, BayesFactor, rtkore, Rcpp11
// rlogis used 32 times in MCMCpack, phcfM, gof, Rcpp11, MasterBayes, PoweR, RInside, Rcpp, littler
double Rf_dnbeta(double, double, double, double, int); // Rf_dnbeta used 4 times in OpenMx, Rcpp, Rcpp11
// dnbeta used 6 times in Rcpp, Rcpp11
double Rf_pnbeta(double, double, double, double, int, int); // Rf_pnbeta used 4 times in OpenMx, Rcpp, Rcpp11
// pnbeta used 23 times in bayesSurv, Rcpp, Rcpp11
double Rf_qnbeta(double, double, double, double, int, int); // Rf_qnbeta used 3 times in Rcpp, Rcpp11
// qnbeta used 8 times in Rcpp, Rcpp11
double Rf_rnbeta(double, double, double); // Rf_rnbeta used 2 times in Rcpp, Rcpp11
// rnbeta used 4 times in Rcpp, Rcpp11
double Rf_dnf(double, double, double, double, int); // Rf_dnf used 3 times in Rcpp, Rcpp11
// dnf used 13 times in RxODE, Rcpp, Rcpp11
double Rf_pnf(double, double, double, double, int, int); // Rf_pnf used 3 times in Rcpp, Rcpp11
// pnf used 12 times in Rcpp, Rcpp11
double Rf_qnf(double, double, double, double, int, int); // Rf_qnf used 3 times in Rcpp, Rcpp11
// qnf used 8 times in Rcpp, Rcpp11
double Rf_dnt(double, double, double, int); // Rf_dnt used 4 times in BayesFactor, Rcpp, Rcpp11
// dnt used 17 times in alineR, DNAtools, gmum.r, Rcpp11, Rcpp, bayesLife, spc
double Rf_pnt(double, double, double, int, int); // Rf_pnt used 3 times in Rcpp, Rcpp11
// pnt used 111 times in BayesXsrc, hypervolume, Rcpp, spc, Rcpp11
double Rf_qnt(double, double, double, int, int); // Rf_qnt used 3 times in Rcpp, Rcpp11
// qnt used 12 times in ore, Rcpp, spc, Rcpp11
double Rf_ptukey(double, double, double, double, int, int); // Rf_ptukey used 2 times in Rcpp, Rcpp11
// ptukey used 6 times in RInside, Rcpp, Rcpp11, littler
double Rf_qtukey(double, double, double, double, int, int); // Rf_qtukey used 2 times in Rcpp, Rcpp11
// qtukey used 6 times in RInside, Rcpp, Rcpp11, littler
double Rf_dwilcox(double, double, double, int); // Rf_dwilcox used 2 times in Rcpp, Rcpp11
// dwilcox used 12 times in clinfun, fuzzyRankTests, Rcpp11, RInside, Rcpp, DescTools, littler
double Rf_pwilcox(double, double, double, int, int); // Rf_pwilcox used 2 times in Rcpp, Rcpp11
// pwilcox used 16 times in fuzzyRankTests, Rcpp11, FRESA.CAD, RInside, simctest, Rcpp, littler
double Rf_qwilcox(double, double, double, int, int); // Rf_qwilcox used 2 times in Rcpp, Rcpp11
// qwilcox used 10 times in RInside, Rcpp, Rcpp11, littler
double Rf_rwilcox(double, double); // Rf_rwilcox used 4 times in Rcpp, Rcpp11
// rwilcox used 11 times in RInside, Rcpp, Rcpp11, littler
double Rf_dsignrank(double, double, int); // Rf_dsignrank used 2 times in Rcpp, Rcpp11
// dsignrank used 7 times in RInside, Rcpp, fuzzyRankTests, Rcpp11, littler
double Rf_psignrank(double, double, int, int); // Rf_psignrank used 2 times in Rcpp, Rcpp11
// psignrank used 11 times in FRESA.CAD, RInside, Rcpp, fuzzyRankTests, Rcpp11, littler
double Rf_qsignrank(double, double, int, int); // Rf_qsignrank used 2 times in Rcpp, Rcpp11
// qsignrank used 6 times in RInside, Rcpp, Rcpp11, littler
double Rf_rsignrank(double); // Rf_rsignrank used 4 times in Rcpp, Rcpp11
// rsignrank used 11 times in RInside, Rcpp, Rcpp11, littler
double Rf_gammafn(double); // Rf_gammafn used 7 times in Rcpp, Rcpp11
// gammafn used 374 times in 46 packages
double Rf_lgammafn(double); // Rf_lgammafn used 61 times in epinet, spBayesSurv, AdaptFitOS, rmgarch, Rcpp11, icenReg, TMB, Rcpp, locfit, OpenMx
// lgammafn used 407 times in 66 packages
double Rf_lgammafn_sign(double, int*); // Rf_lgammafn_sign used 2 times in Rcpp, Rcpp11
// lgammafn_sign used 4 times in Rcpp, Rcpp11
void Rf_dpsifn(double, int, int, int, double*, int*, int*); // Rf_dpsifn used 2 times in Rcpp, Rcpp11
// dpsifn used 4 times in Rcpp, Rcpp11
double Rf_psigamma(double, double); // Rf_psigamma used 6 times in TMB, Rcpp, Rcpp11
// psigamma used 9 times in Rcpp, Rcpp11
double Rf_digamma(double); // Rf_digamma used 47 times in inarmix, stochvol, Rcpp, frailtySurv, Rcpp11
// digamma used 20689 times in 54 packages
double Rf_trigamma(double); // Rf_trigamma used 10 times in stochvol, Rcpp, frailtySurv, Rcpp11
// trigamma used 128 times in 24 packages
double Rf_tetragamma(double); // Rf_tetragamma used 5 times in Rcpp, Rcpp11
// tetragamma used 22 times in Rcpp, Rcpp11, RcppShark
double Rf_pentagamma(double); // Rf_pentagamma used 5 times in Rcpp, Rcpp11
// pentagamma used 8 times in Rcpp, Rcpp11
double Rf_beta(double, double); // Rf_beta used 8 times in Rcpp, iBATCGH, RandomFields, Rcpp11
// beta used 32773 times in 615 packages
double Rf_lbeta(double, double); // Rf_lbeta used 24 times in Rcpp, poisDoubleSamp, bcp, Rcpp11
// lbeta used 213 times in 23 packages
double Rf_choose(double, double); // Rf_choose used 9 times in DepthProc, Rcpp, bfp, polyfreqs, Rcpp11
// choose used 1368 times in 287 packages
double Rf_lchoose(double, double); // Rf_lchoose used 38 times in Rcpp, bfp, poisDoubleSamp, noncompliance, Rcpp11
// lchoose used 54 times in 17 packages
double Rf_bessel_i(double, double, double); // Rf_bessel_i used 3 times in OpenMx, Rcpp, Rcpp11
// bessel_i used 29 times in BiTrinA, Binarize, overlap, RCALI, Hankel, Rcpp11, rotations, Rcpp, moveHMM, dti
double Rf_bessel_j(double, double); // Rf_bessel_j used 3 times in OpenMx, Rcpp, Rcpp11
// bessel_j used 25 times in SpatialExtremes, constrainedKriging, BH, Rcpp, RandomFields, Rcpp11
double Rf_bessel_k(double, double, double); // Rf_bessel_k used 7 times in TMB, Rcpp, OpenMx, rmgarch, Rcpp11
// bessel_k used 127 times in 26 packages
double Rf_bessel_y(double, double); // Rf_bessel_y used 3 times in OpenMx, Rcpp, Rcpp11
// bessel_y used 4 times in Rcpp, Rcpp11
double Rf_bessel_i_ex(double, double, double, double *); // Rf_bessel_i_ex used 2 times in Rcpp, Rcpp11
// bessel_i_ex used 5 times in Rcpp, Rcpp11, dti
double Rf_bessel_j_ex(double, double, double *); // Rf_bessel_j_ex used 2 times in Rcpp, Rcpp11
// bessel_j_ex used 4 times in Rcpp, Rcpp11
double Rf_bessel_k_ex(double, double, double, double *); // Rf_bessel_k_ex used 2 times in Rcpp, Rcpp11
// bessel_k_ex used 9 times in geostatsp, Rcpp, tgp, Rcpp11
double Rf_bessel_y_ex(double, double, double *); // Rf_bessel_y_ex used 2 times in Rcpp, Rcpp11
// bessel_y_ex used 4 times in Rcpp, Rcpp11
double Rf_pythag(double, double); // Rf_pythag used 4 times in Rcpp, Rcpp11
// pythag used 105 times in 21 packages
int Rf_imax2(int, int); // Rf_imax2 used 2 times in Rcpp, Rcpp11
// imax2 used 150 times in 37 packages
int Rf_imin2(int, int); // Rf_imin2 used 2 times in Rcpp, Rcpp11
// imin2 used 193 times in 28 packages
double Rf_fmax2(double, double); // Rf_fmax2 used 2 times in Rcpp, Rcpp11
// fmax2 used 345 times in 60 packages
double Rf_fmin2(double, double); // Rf_fmin2 used 4 times in TMB, Rcpp, Rcpp11
// fmin2 used 224 times in 46 packages
double Rf_sign(double); // Rf_sign used 4 times in OpenMx, Rcpp, Rcpp11
// sign used 5291 times in 389 packages
double Rf_fprec(double, double); // Rf_fprec used 4 times in Rcpp, Rcpp11
// fprec used 38 times in wfe, Rcpp, msm, list, Rcpp11
double Rf_fround(double, double); // Rf_fround used 8 times in Rcpp, RcppClassic, Rcpp11
// fround used 13 times in bioPN, exactLoglinTest, frontiles, Rcpp11, FRESA.CAD, Rcpp, rmetasim, treethresh
double Rf_fsign(double, double); // Rf_fsign used 2 times in Rcpp, Rcpp11
// fsign used 66 times in 15 packages
double Rf_ftrunc(double); // Rf_ftrunc used 4 times in Rcpp, Rcpp11
// ftrunc used 123 times in 22 packages
double Rf_log1pmx(double); // Rf_log1pmx used 2 times in Rcpp, Rcpp11
// log1pmx used 20 times in DPpackage, BH, Rcpp, Rcpp11
double Rf_lgamma1p(double); // Rf_lgamma1p used 3 times in OpenMx, Rcpp, Rcpp11
// lgamma1p used 14 times in Rcpp, OpenMx, ergm.count, heavy, mixAK, Rcpp11
double cospi(double); // cospi used 1 times in Rmpfr
double sinpi(double); // sinpi used 1 times in Rmpfr
double tanpi(double); // tanpi used 1 times in Rmpfr
double Rf_logspace_add(double logx, double logy); // Rf_logspace_add used 2 times in Rcpp, Rcpp11
// logspace_add used 21 times in sna, BMN, Rcpp11, RxCEcolInf, SamplerCompare, STAR, Rcpp
double Rf_logspace_sub(double logx, double logy); // Rf_logspace_sub used 2 times in Rcpp, Rcpp11
// logspace_sub used 16 times in sna, Rcpp11, SamplerCompare, truncnorm, STAR, Rcpp, bfp
}
# /Users/ls/Source/git/fastr/com.oracle.truffle.r.native/include/S.h
extern "C" {
extern void seed_in(long *); // seed_in used 11 times in raster, excursions, IGM.MEA, GENLIB, VLMC, maptools, robust
extern void seed_out(long *); // seed_out used 7 times in GENLIB, raster, VLMC, maptools, robust, IGM.MEA
extern double unif_rand(void); // unif_rand used 2135 times in 327 packages
extern double norm_rand(void); // norm_rand used 408 times in 93 packages
typedef struct {
double re;
double im;
} S_complex; // S_complex used 2 times in ifultools
}
</pre>
== Stats ==
<pre>
                 0    1    2    3    4    5    6    7    8    9  10+
Macro:         129   12   15   12   12    4    8    3    0    7  208  (usage count)
  (410)        129   34   20   12   22   12    9   11    5    6  150  (distinct package count)
Function:      259   32   35   25   33   15   18   16   11    9  351  (usage count)
  (804)        259   65   50   41   48   27   29   13    9   21  242  (distinct package count)
Variable:       32    2    6    4    2    1    2    1    0    0   22  (usage count)
  (72)          32    8    5    3    1    1    1    2    0    1   18  (distinct package count)
TypeDef:        10    0    1    2    0    0    0    2    2    0   13  (usage count)
  (30)          10    1    2    2    0    1    0    2    2    0   10  (distinct package count)
Alias:          68   14   26   14   13    8    6    6    4    4  213  (usage count)
  (376)         68   42   41   18   20   10   12   16    9    3  137  (distinct package count)
</pre>
(for a quick explanation of these stats see [[Native_API_stats_of_R.h]])
R Certification
''(last revised 2017-03-17 by Jpmurillo)''
== Background ==
We have seen an exponential increase in demand for R among a large and varied set of audiences. People from many domains are keen to learn it and to improve their skills further. This has created a supply-and-demand gap that is being filled by various teaching channels. While there is no dearth of R teaching material, both in-class and online, there is still arguably a shortage of R users with solid, demonstrable skills. This shortage of qualified personnel, combined with an abundance of self-taught data scientists, creates confusion for employers as well as for prospective employees who have the required skill set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R professionals. The R Consortium, as the governing body for the R community, needs to step in as the neutral agency before a third party arrives with a similar certification mechanism and fills this gap. From a competitive perspective, SPSS and SAS already have certification mechanisms in place.
== Moving Parts ==
We understand that there are multiple moving pieces, and we have identified four main areas to bucket them:
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a first pass at the subcategories under those buckets, and at the multiple paths through those functional areas that would allow the R community to address the challenges mentioned above.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (DataCamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solutions)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liaison, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* Trishan de Lanerolle (Linux Foundation)
* '''MeharPratap Singh (ProCogia)'''
* JuanPablo Murillo (ProCogia)
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
==== 2/1 Working Group Meeting ====
Attendees: Mine Cetinkaya-Rundel, Trishan de Lanerolle, Mehar Singh, JuanPablo Murillo, Mark Sellors
* Identified tech representatives in partner companies to collaborate on domain objective for certification.
* Discussed potential focuses of certification and certification seeker’s profile.
* Agreed on reaching out to connections at Linux Foundation for their expertise in technical certification logistics and setup.
* Updated the deadline for completing the domain objective to 2/28, with the domain objective to be opened to the R community in mid-March.
==== 2/13, 2/22, 3/1 Technical Representative Meetings - Domain Objective ====
Attendees: Richie Cotton, Aimee Gott, Garrett Grolemund, Mehar Singh, JuanPablo Murillo, Mine Cetinkaya-Rundel, Jeremy Reynolds, Nick Carchedi
* Developed a comprehensive content outline, which breaks R programming proficiency into several competency areas.
* Put together a list of packages/libraries to accompany content outline.
* Discussed and proposed a composite score in addition to a pass/fail final score. Composite score would take into account mastery within each competency area of the outline.
==== 3/17 Working Group Meeting ====
Attendees: Richie Cotton, Aimee Gott, Mehar Singh, JuanPablo Murillo, Mine Cetinkaya-Rundel, Mark Sellors, Nick Carchedi, Clyde Seepersad, David Smith
* Updated the larger group on recent progress regarding certification topics and the scoring approach.
* Shared completed certification content outline and package list internally with the Working Group.
* Discussed financial details of launching a performance based certification with an expert in the field.
706c65edd5539334e6db33264573ede76e22376a
69
68
2017-03-09T17:27:16Z
Jpmurillo
17
/* Members */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* Trishan de Lanerolle (Linux Foundation)
* '''MeharPratap Singh (ProCogia)'''
* JuanPablo Murillo (ProCogia)
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
==== 2/1 Working Group Meeting ====
Attendees: Mine Cetinkaya-Rundel, Trishan de Lanerolle, Mehar Singh, JuanPablo Murillo, Mark Sellors
* Identified tech representatives in partner companies to collaborate on domain objective for certification.
* Discussed potential focuses of certification and certification seeker’s profile.
* Agreed on reaching out to connections at Linux Foundation for their expertise in technical certification logistics and setup.
* Updated deadline of domain objective completion to 2/28 and deadline of opening domain objective to R community in mid March.
==== 2/13, 2/22, 3/1 Technical Representative Meetings - Domain Objective ====
Attendees: Richie Cotton, Aimee Gott, Garrett Grolemund, Mehar Singh, JuanPablo Murillo, Mine Cetinkaya-Rundel, Jeremy Reynolds, Nick Carchedi
* Developed a comprehensive content outline, which breaks R programming proficiency into several competency areas.
* Put together a list of packages/libraries to accompany content outline.
* Discussed and proposed a composite score in addition to a pass/fail final score. Composite score would take into account mastery within each competency area of the outline.
2a92de1b15a60e2c3aaf593e0c0d35f2f7aba8ad
68
67
2017-03-09T17:26:40Z
Jpmurillo
17
/* Minutes */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* Trishan de Lanerolle (Linux Foundation)
* '''MeharPratap Singh (ProCogia)'''
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
==== 2/1 Working Group Meeting ====
Attendees: Mine Cetinkaya-Rundel, Trishan de Lanerolle, Mehar Singh, JuanPablo Murillo, Mark Sellors
* Identified tech representatives in partner companies to collaborate on domain objective for certification.
* Discussed potential focuses of certification and certification seeker’s profile.
* Agreed on reaching out to connections at Linux Foundation for their expertise in technical certification logistics and setup.
* Updated deadline of domain objective completion to 2/28 and deadline of opening domain objective to R community in mid March.
==== 2/13, 2/22, 3/1 Technical Representative Meetings - Domain Objective ====
Attendees: Richie Cotton, Aimee Gott, Garrett Grolemund, Mehar Singh, JuanPablo Murillo, Mine Cetinkaya-Rundel, Jeremy Reynolds, Nick Carchedi
* Developed a comprehensive content outline, which breaks R programming proficiency into several competency areas.
* Put together a list of packages/libraries to accompany content outline.
* Discussed and proposed a composite score in addition to a pass/fail final score. Composite score would take into account mastery within each competency area of the outline.
14fafd0eac5f50436b032090f9e2084ab522c2e0
67
66
2017-03-09T17:19:54Z
Jpmurillo
17
/* Minutes */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* Trishan de Lanerolle (Linux Foundation)
* '''MeharPratap Singh (ProCogia)'''
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
2/1 Working Group Meeting:
Attendees: Mine Cetinkaya-Rundel, Trishan de Lanerolle, Mehar Singh, JuanPablo Murillo, Mark Sellors
Identified tech representatives in partner companies to collaborate on domain objective for certification.
Discussed potential focuses of certification and certification seeker’s profile.
Agreed on reaching out to connections at Linux Foundation for their expertise in technical certification logistics and setup.
Updated deadline of domain objective completion to 2/28 and deadline of opening domain objective to R community in mid March.
2/13, 2/22, 3/1 Technical Representative Meetings - Domain Objective
Attendees: Richie Cotton, Aimee Gott, Garrett Grolemund, Mehar Singh, JuanPablo Murillo, Mine Cetinkaya-Rundel, Jeremy Reynolds, Nick Carchedi
Developed a comprehensive content outline, which breaks R programming proficiency into several competency areas.
Put together a list of packages/libraries to accompany content outline.
Discussed and proposed a composite score in addition to a pass/fail final score. Composite score would take into account mastery within each competency area of the outline.
582fc97f75376443dbeb1fd58d0295d9e1e61817
66
65
2017-01-21T16:08:47Z
MeharPratapSingh
15
/* Members */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* Trishan de Lanerolle (Linux Foundation)
* '''MeharPratap Singh (ProCogia)'''
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
6b334b08102d5253843fb98967c9a48322de785a
65
64
2017-01-21T16:03:11Z
MeharPratapSingh
15
/* Moving Parts */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* '''MeharPratap Singh (ProCogia)'''
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
9ee17819fe835124061f6225b39f218465da4bc0
64
63
2017-01-21T16:02:41Z
MeharPratapSingh
15
/* Members */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* ''Hadley Wickham (ISC liason, RStudio)''
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* '''MeharPratap Singh (ProCogia)'''
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
e30dc668cf9a97afd881966afc8ad1f2caeccd8f
63
62
2017-01-21T16:01:28Z
MeharPratapSingh
15
/* Key decisions to be made */
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
* Cost
** Vendor Cost
** Consortium Cost
** Student Cost
* Profits
2. Marketing and Promotions
3. Testing
* Pass/Fail outcome
* Percentage – 0 to 100%
* Certification Levels
4. How long the certificate remains valid
5. Partnerships
* Training
* Testing
* Certifications
6. Curriculum –
* Generic
* Pharmacy
* Financial
* Visualizations
* Others
7. Phased Approach
* Identify the next couple of phases
* Phase 1
* Phase 2
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* Hadley Wickham (ISC liason, RStudio)
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* MeharPratap Singh (ProCogia)
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
d9e3698ee6cc4eac112c67c1cb7620a727f7de84
62
59
2017-01-21T15:27:38Z
MeharPratapSingh
15
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the sub categories under those buckets and the multiple paths that we can have for those functional areas that would allow the R Community to solve for the above mentioned challenges.
[[File:Initial Path Visual.PNG|thumb|The likely path of certification to be taken in the initial cut]]
== Key decisions to be made ==
The WG will work on the following elements regarding R certification:
1. Financials –
o Cost
Vendor Cost
Consortium Cost
Student Cost
o Profits
2. Marketing and Promotions
3. Testing
o Pass/Fail outcome
o Percentage – 0 to 100%
o Certification Levels
4. How long the certificate remains valid
5. Partnerships
o Training
o Testing
o Certifications
6. Curriculum –
o Generic
o Pharmacy
o Financial
o Visualizations
o Etc.
7. Phased Approach
o Identify the next couple of phases
o Phase 1
o Phase 2
o Etc.
== Members ==
* David Smith (Microsoft)
* Jeremy Reynolds (Microsoft)
* Jonathan Cornelissen (Datacamp)
* Martijn Theuwissen (DataCamp)
* Mark Sellors (Mango Solution)
* Aimee Gott (Mango Solutions)
* Hadley Wickham (ISC liason, RStudio)
* Mine Cetinkaya-Rundel (RStudio)
* Dinesh Nirmal (IBM)
* MeharPratap Singh (ProCogia)
== Milestones ==
=== Achieved ===
== Open Questions ==
== Minutes ==
cf945c3c071a289c048a2c94acb28cb6a4189a46
59
2017-01-21T15:12:42Z
MeharPratapSingh
15
Created page with "== Background == We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and furt..."
wikitext
text/x-wiki
== Background ==
We have seen an exponential increase in the demand of R among a large and variant set of audiences. People from various domains are keen to learn it and further improve their skills. This has created a supply and demand gap which is being filled by various teaching channels. While there is no dearth of the amount of R teaching material, both in-class and online, there is still arguably a shortage in skilled R users who possess quality skills in R. This shortage of qualified personnel and abundance of self-taught data scientists leads to confusion for employers as well as prospective employees who have the required skill-set but no way to differentiate themselves.
== Proposal ==
There is no system today to certify qualified R Professionals. The R-Consortium, as the governing body for the R community, needs to step in as the neutral agency before another third-party comes in with a similar certification mechanism and tries to fill this gap. From a competitive perspective, SPSS and SAS already has a certification mechanism in place.
== Moving Parts ==
We understand that there are multiple moving pieces and we have identified 3 main areas to bucket them –
1. Specialization
2. Training
3. Testing
4. Certification
We have also taken a stab at the subcategories under those buckets and at the multiple paths we could take for those functional areas, which would allow the R community to address the challenges mentioned above.
== Members ==
* '''Michael Lawrence''' (Genentech)
* '''Indrajit Roy''' (HP Enterprise)
* ''Joe Rickert'' (ISC liaison, RStudio)
* Bernd Bischl (LMU)
* Matt Dowle (H2O)
* Mario Inchiosa (Microsoft)
* Michael Kane (Yale)
* Javier Luraschi (RStudio)
* Edward Ma (HP Enterprise)
* Luke Tierney (University of Iowa)
* Simon Urbanek (AT&T)
* Bryan Lewis (Paradigm4)
* Hossein Falaki (Databricks)
== Milestones ==
=== Achieved ===
* Adopt ddR as a prototype for a standard API for distributed computing in R
=== 2016 Internship ===
Clark Fitzgerald, a PhD student in the UC Davis Statistics department, worked on ddR and Spark integration.
* Wrote [https://github.com/clarkfitzg/sparklite sparklite] and [https://github.com/clarkfitzg/rddlist rddlist] as minimal proof-of-concept R packages to connect and store general data on Spark. [https://docs.google.com/presentation/d/1WfUQ2ockNku90GWMXonEhUEcVOWcgBmWwt5uYSSBYPY/edit?usp=sharing slides]
* [https://issues.apache.org/jira/browse/SPARK-16785 Patched SparkR] to allow user defined functions returning binary columns. This allows implementation of different data structures in SparkR.
* Updated [https://github.com/vertica/ddR/wiki/Design design documents] with suggested changes to ddR's internal design and object-oriented model.
* Improved [https://github.com/vertica/ddR/pull/15 testing and ddR internals].
=== Outstanding ===
* Agree on a final standard API for distributed computing in R
* Implement at least one scalable backend based on an open-source technology like Spark, SQL, etc
== Open Questions ==
* How can we address the needs of both the end user data scientists and the algorithm implementers?
* How should we share data between R and a system like Spark?
* Is there any way to unify SparkR and sparklyr?
* Could we use the abstractions of tensorflow to partially or fully integrate with platforms like Spark?
== Minutes ==
=== 12/08/2016 ===
* Yuan Tang from Uptake was the presenter
** Michael and Indrajit will write a status report for the working group sometime in December or January
** Yuan gave an overview of TensorFlow
** JJ, Dirk and Yuan are working on R layer for TensorFlow
** TensorFlow is a platform for machine learning as well as other computations (even math proofs).
** It is GPU optimized and distributed.
** It is used in search, speech recognition, Google photos, etc.
** TensorFlow computations are directed graphs. Nodes are operations and edges are tensors.
** A lot of array, matrix, etc. operations are available
** Backend is mostly C++. Python front end exists.
** The TensorFlow R package is based on the Python frontend
** In multi-device setting, TensorFlow figures out which devices to use and manages communication between devices.
** Computations are fault tolerant
** Yuan has previously worked on Scikit Flow, which is now TF.Learn. It's an easy transition for scikit-learn users.
** Yuan gave a brief overview of the python interface
** TensorFlow in R handles conversion between R and Python. The syntax is very similar to the Python API
** Future work: Adding more examples and tutorials, integration with Kubernetes/Marathon like framework.
** During the Q/A there were questions related to whether R kernels can be supported in TensorFlow, and whether R dataframes are a natural wrapper for TensorFlow objects.
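The graph-based model described in these minutes can be sketched in R. The fragment below is illustrative only (not code from the meeting); it assumes the <code>tensorflow</code> R package and a TensorFlow 1.x-era installation, whose session API is shown here:

```r
# Illustrative sketch (assumes the 'tensorflow' R package and a
# TensorFlow 1.x installation; not runnable without them).
library(tensorflow)

# Nodes are operations, edges are tensors:
a <- tf$constant(3, dtype = tf$float32)
b <- tf$constant(4, dtype = tf$float32)
total <- a + b          # '+' dispatches to a TensorFlow op

# The R package converts between R values and Python objects when the
# graph is evaluated:
sess <- tf$Session()
sess$run(total)         # evaluates the graph on the backend
```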
=== 11/10/2016 ===
* SparkR slides were presented by Hossein Falaki and Shivaram from Databricks and UC Berkeley:
** SparkR was a prototype from AMPLab (2014). Initially it had the RDD API and was similar to PySpark API
** In 2015, with the merge into upstream Spark, the decision was made to integrate with the DataFrame API and hide the RDD API
** In 2016 more MLlib algorithms have been integrated and new APIs have been added. A CRAN package will be released soon
** The original SparkR architecture runs R on the master, which communicates with the JVM processes in the driver. The driver sends commands to the worker JVM processes, which execute them as Scala/Java statements.
** The system can read distributed data inside the JVM from different sources such as S3, HDFS, etc.
** The driver has a socket based connection between SparkR and the RBackend. RBackend runs on the JVM, deserializes the R code, and converts the R statements into Java calls.
** collect() and createDataFrame() are used to move data between R and JVM processes. createDataFrame() converts your local R data into a JVM-based distributed data frame.
** The API has IO, Caching, MLLib, and SQL related commands
** Since Spark 2.0, we can run R processes inside the JVM worker processes. There is no need to keep long running R processes.
** There are three UDF functions: (1) lapply, which runs a function on each value of a list; (2) dapply, which runs a function on each partition of a data frame (you have to be careful about how the data is partitioned); and (3) gapply, which groups on the given column names and then runs the function on each group.
** The new CRAN package's install.spark() will automatically download and install Spark. Automated CRAN checks have been added for every commit to the code. It should be available with Spark 2.1.0
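As a hedged illustration of the API summarized above (not code shown at the meeting), a minimal SparkR session might look like this, assuming the SparkR package and a Spark >= 2.0 installation:

```r
# Minimal SparkR sketch (assumes the SparkR package and Spark >= 2.0;
# install.spark() can fetch a local Spark if none is found).
library(SparkR)
sparkR.session()

# createDataFrame() ships local R data to a JVM-backed distributed frame;
# collect() brings results back to the driver as a plain data.frame.
df <- createDataFrame(faithful)

# dapply() runs an R function on each partition; the result schema must
# be declared up front.
schema <- structType(structField("eruptions", "double"),
                     structField("waiting",   "double"),
                     structField("ratio",     "double"))
out <- dapply(df, function(p) cbind(p, ratio = p$eruptions / p$waiting), schema)
head(collect(out))
```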
* Q/A
** Currently trying to get zero-copy data frames between Python and Spark. Spark 2.0 has an off-heap manager that uses Arrow. Once this feature is tested on the Python API, the next step will be integrating R.
** Spark dataframes benefit from plan optimizations; this is not SparkR specific. R UDFs are still treated as black boxes by the optimizer
** Spark doesn't directly support matrices. There is no immediate intent to do so either. One can store an array or vector as a single column of a Spark dataframe.
=== 10/13/2016 ===
''Detailed minutes were not taken for this meeting''
* Mario Inchiosa: Microsoft's perspective on distributed computing with R
** Microsoft R Server: abstractions and algorithms for distributed computation on top of open-source R
** Desired features of a distributed API like ddR:
*** Supports PEMA (initialize, processData, updateResults, processResults)
*** Cross-platform
*** Fast runtime
*** Supports algorithm writer and data scientist
*** Comes with a comprehensive set of algorithms
*** Easy deployment
** ddR is making good progress but does not yet meet those requirements
* Indrajit: ddR progress report and next steps
** Recap of Clark's internship
** Next step: implement some of Clark's design suggestions: https://github.com/vertica/ddR/wiki/Design
** Spark integration will be based on sparklyr
** Should we limit Spark interaction to the DataFrame API or directly interact with RDDs?
*** Consensus: will likely need flexibility of RDDs to implement everything we need, e.g., arrays and lists
** Clark and Javier raised concerns about the scalability of sharing data between R and Spark
*** Michael: Spark is a platform in its own right, so interoperability is important, should figure something out
*** Bryan Lewis: Why not use tensor abstraction from tensorflow? Spark supports tensorflow and an R interface is already in the works.
** Michael raised the issue of additional funding from the R Consortium to continue Clark's work
*** Joe Rickert suggested that the working group develop one or more white papers summarizing the findings of the working group for presentation to the Infrastructure Steering Committee.
*** Consensus was in favor of this, and several pointed out that the progress so far has been worthwhile, despite not meeting the specific goals laid out in the proposal.
* Michael: do we want to invite some external speakers, one per meeting, from groups like databricks, tensorflow, etc?
** Consensus was in favor.
=== 9/8/2016 ===
''Detailed minutes were not taken for this meeting''
* Clark Fitzgerald: internship report
** Developed two packages for low-level Spark integration: rddlist, sparklite
** Patched a bug in Spark
** ddR needs refactoring before Spark integration is feasible:
*** dlist, dframe, and darray should be formal classes.
*** Partitions of data should be represented by a distributed list abstraction, and most functions (e.g., dmapply) should be implemented on top of that list.
* Javier: sparklyr update
** Preparing for CRAN release
** Mario: what happened to sparkapi?
*** Javier: sparkapi has been merged into sparklyr in order to avoid overhead of maintaining two packages. ddR can do everything it needs with sparklyr.
* Luke Tierney: Update on the low-level vector abstraction, which might support interfaces like ddR and sparklyr.
** Overall approach seems feasible, but still working out a few details.
** Will land in a branch soon.
* Bernd Bischl: update on the batchtools package
** Successor to BatchJobs based on in-memory database
=== 8/11/2016 ===
''Meeting was canceled due to lack of availability.''
=== 7/14/2016 ===
* Introduced Clark, the intern funded by the R Consortium. Clark is a graduate student at UC Davis. He will work on ddR integration with Spark and on improving the core ddR API as well, such as adding a distributed apply() for matrices, a split function, etc.
* Bernd: Can I play around with ddR now? What backend should I use? How robust is the code?
** Clark: It's in good enough shape to be played around with. We will continue to improve it. Hopefully the spark integration will be done before the end of my internship in September.
* Q: Is anyone working on using ddR to make ML scale better?
** Indrajit: We have kmeans, glm, etc. already in CRAN.
** Michael Kane: We are working on glmnet and other packages related to algorithm development.
* Javier gave a demo of sparklyr and sparkapi.
** Motivation for the package: The SparkR package overrides the dplyr interface, which is an issue for RStudio. SparkR is not a CRAN package, which makes it difficult to contribute changes. dplyr, RStudio's most popular tool, is broken by SparkR.
** sparklyr provides a dplyr interface. It will also support ML-like interfaces, such as consuming an ML model.
** sparklyr does not currently support any distributed computing features. Instead, we can recommend ddR as the distributed computing framework on top of sparkapi. We will put the code on CRAN in a couple of weeks.
** Simon: Can you talk more about the wrapper/low level API to work with Spark?
*** Javier: The package under the covers is called "sparkapi"; it is meant to be used by package builders. "spark_context()" and "invoke()" are the functions used to call Scala methods. It does not currently allow you to run R user-defined functions; I am working on enabling that feature. Depending on the interest in using ddR with sparkapi, I can spend more time making sparkapi feature rich.
** Indrajit: What versions of Spark are supported?
*** Javier: Anything after 1.6
** Bernd: How do you export data?
*** Javier: We are using all the code from SparkR. So everything in SparkR should continue to work. We don't need to change SparkR. We just need to maintain compatibility.
** Bernd: What happens when the RDDs are very large?
*** Javier: Spark will spill on disk.
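The demo described above might be sketched as follows. This is a hypothetical session, not Javier's actual demo code; it assumes the sparklyr and dplyr packages and a local Spark installation:

```r
# Hypothetical sparklyr session illustrating the dplyr interface and the
# low-level invoke() API (assumes sparklyr, dplyr, and a local Spark).
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")
mtcars_tbl <- copy_to(sc, mtcars)     # ship a local data frame to Spark

# dplyr verbs are translated to Spark SQL behind the scenes:
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg))

# The low-level API (originally the separate 'sparkapi' package) invokes
# Scala methods directly on JVM objects:
n <- mtcars_tbl %>% spark_dataframe() %>% invoke("count")

spark_disconnect(sc)
```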
* Michael Kane: Presented examples that he implemented on ddR.
** Talked about how the different distributed packages compare to each other in terms of functionality.
** Michael K. looked at glm and truncated SVD on ddR. He was able to implement IRLS on ddR by implementing two distributed functions, such as "cross". For truncated SVD he only needed to overload two different distributed multiplications.
** Ran these algorithms on the 1000 genome dataset.
** Overall liked ddR since it was easy to implement the algorithms in the package.
** New ideas:
*** Trying to separate the data layer from the execution layer
*** Create an API that works on "chunks" (which is similar to the "parts" API in ddR). Would like to add these APIs to ddR.
*** Indrajit: You should be able to get some of the chunk-like features by using parts and dmapply. E.g., you can call dmapply to read 10 different files, which then correspond to 10 chunks. These are wrapped as a darray or dframe, but you can continue to work on the individual chunks by using parts(i).
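The dmapply/parts idioms discussed in this meeting can be sketched as below. This is an illustrative fragment under the assumption that the ddR package (with its default "parallel" backend, which needs no extra setup) is installed; consult the ddR documentation for exact signatures:

```r
# Sketch of the ddR idioms discussed above (assumes the ddR package;
# the default 'parallel' backend requires no extra setup).
library(ddR)

# dmapply() is the core primitive: a distributed mapply() that returns a
# distributed object (dlist, darray, or dframe).
dl <- dmapply(function(i) i * i, 1:4)

# parts()/collect() expose the individual partitions ("chunks"):
one_chunk <- collect(dl, 1)   # fetch a single partition
all_vals  <- collect(dl)      # gather all partitions locally
```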
=== 6/2/2016 ===
* Round table introduction
* (Michael) Goals for the group:
** Make a common abstraction/interfaces to make it easier to work with distributed data and R
** Unify the interface
** The working group will run for a year: get an API defined and get at least one open-source reference implementation
** Not everyone needs to work hands-on. We will create smaller groups to focus on those aspects.
** We tried to get a diverse group of participants
* Logistics: meet monthly, focus groups may meet more often
* The R Consortium may be able to figure out ways to fund smaller projects that come out of the working group
* Michael Kane: Should we start with an inventory of what is available and people are using?
** Michael Lawrence: Yes, we should find the collection of tools as well as the use cases that are common.
** Joe: I will figure out a wiki space.
* Javier: Who are the end users? Simon: A common layer is needed to get algorithms working. We started from algorithms and tried to find the minimal common API. One of the goals is to make sure everyone is on the same page and not trying to create his/her own custom interface.
* Javier: Should we try to get people with more algo expertise?
* Joe: Simon do you have a stack diagram?
* Simon: Can we get R Consortium to help write things up and draw things?
* Next meeting: Javier is going to present SparkR next time.
499a199feeaa7ec74a4a9d6a653dcb19c63b9c5a
R Consortium and the R Community Code of Conduct
0
15
40
2016-08-19T16:48:50Z
Trishan
3
Created page with "== R Consortium and the R Community Code of Conduct == The R Consortium, like the R community as a whole, is made up of members from around the globe with a diverse set of sk..."
wikitext
text/x-wiki
== R Consortium and the R Community Code of Conduct ==
The R Consortium, like the R community as a whole, is made up of members from around the globe with a diverse set of skills, personalities, and experiences. It is through these differences that our community experiences great successes and continued growth.
Members of the R Consortium and their representatives are bound to follow this R Community Code of Conduct (which is based on the Python Community Code of Conduct). We encourage all members of the R community to likewise follow these guidelines which help steer our interactions and strive to keep '''R''' a positive, successful, and growing community.
== R Community Code of Conduct==
A member of the R Community is:
'''Open:''' Members of the community are open to collaboration, whether it's on projects, working groups, packages, problems, or otherwise. We're receptive to constructive comment and criticism, as the experiences and skill sets of other members contribute to the whole of our efforts. We're accepting of anyone who wishes to take part in our activities, fostering an environment where all can participate and everyone can make a difference.
'''Considerate:''' Members of the community are considerate of their peers — other R users. We're thoughtful when addressing the efforts of others, keeping in mind that oftentimes the labor was completed simply for the good of the community. We're attentive in our communications, whether in person or online, and we're tactful when approaching differing views.
'''Respectful:''' Members of the community are respectful. We're respectful of others, their positions, their skills, their commitments, and their efforts. We're respectful of the volunteer efforts that permeate the R community. We're respectful of the processes set forth in the community, and we work within them. When we disagree, we are courteous in raising our issues.
Overall, we're good to each other. We contribute to this community not because we have to, but because we want to. If we remember that, these guidelines will come naturally.
'''Questions/comments/reports?''' Please write to the Code of Conduct address:
''conduct@r-consortium.org''. (This will email the Board Chair and the R Consortium Program Manager.) Include any available relevant information, including links to any publicly accessible material relating to the matter.
b33148b94e10e5b594e5766d20cdc678c129ebfd
R Native API
0
6
35
33
2016-07-03T01:35:56Z
Lukasstadler
8
wikitext
text/x-wiki
= Working Group: Future-proof Native APIs for R =
This working group will assess current native API usage, gather community input, and work toward an easy-to-understand, consistent, and verifiable API that will drive R language adoption.
=== 2016-06-30 Meeting UseR! ===
see [[R Native API meeting 2016-06-30]]
=== 2016-06-20 Teleconference ===
see [[R Native API call 2016-06-20]]
=== 2016-06-20 Survey of API usage ===
see [[Initial Survey of API Usage]]
=== 2016-06-13 Initial WG members ===
* Alexander Bertram, BeDataDriven
* Torsten Hothorn, University of Zurich
* Mick Jordan, Oracle Labs
* Stephen Kaluzny, TIBCO (ISC representative)
* Michael Lawrence, Genentech
* Karl Millar, Google
* Duncan Murdoch, University of Western Ontario
* Radford Neal, University of Toronto
* Edzer Pebesma, University of Münster
* Indrajit Roy, HP Labs
* Michael Sannella, TIBCO
* Lukas Stadler, Oracle Labs
* Luke Tierney, University of Iowa
* Simon Urbanek, AT&T Research Labs
* Jan Vitek, Northeastern University
* Gregory Warnes, Boehringer Ingelheim
=== 2016-05-10 Initial phone meeting ===
Attendance:
* Stephen Kaluzny (WG sponsor in ISC)
* Lukas Stadler, Adam Welc, Mark Hornick (authors of the ISC project proposal)
Topics:
* Defining a WG leader (Lukas Stadler)
* Channels on which to distribute the call for participation:
:* r-devel mailing list
:* Authors of important packages (top packages in download stats) with native components
:* Alternative and modified implementations: Tibco TERR, renjin, pqr, cxxr/rho, FastR
:* Program committee of the RIOT workshop [[http://riotworkshop.github.io]]
:* Lukas Stadler and Mick Jordan as initial members from Oracle Labs
* Discussion about stopping criteria for the WG / "What is the job of this WG?"
* Plan for meetings and discussions, general openness of all WG communication
97ad5e3c46d0b43d63993e5fbcfb381f2954b2cf
R Native API call 2016-06-20
0
13
32
2016-06-25T18:34:21Z
Lukasstadler
8
Created page with "An initial meeting intended to provide an opportunity to get an overview of everybody's motivation and view on the matter. ==== A round of introductions ==== *Short introduct..."
wikitext
text/x-wiki
An initial meeting intended to provide an opportunity to get an overview of everybody's motivation and view on the matter.
==== A round of introductions ====
*Short introduction by Stephen Kaluzny (TIBCO, R-consortium ISC member, sponsor for this WG)
*Lukas Stadler (Oracle Labs, FastR project, WG lead)
:*Is there consensus that the R native API could use an overhaul, more documentation, etc.?
:*Some basic points to consider: small changes vs. big changes, C/C++, separation into different modules
:*[[Initial Survey of API Usage]] - some discussions about the % of API needed by most packages and applications - 10%, 80%, 90%?
*Simon Urbanek (AT&T Labs)
:*Experience from the Aleph project
:*You can get a long way with a small portion of the API
*Luke Tierney (University of Iowa)
:*Has to take care of this if it gets into GNUR
:*Generally interested, implements optimizations in R core
*Alexander Bertram (BeDataDriven, renjin)
:*A lot of the API is just BLAS, etc. - which parts are R specific, how big is the actual interface?
:*APIs should also provide guidance to package developers
:*R core already moved some code out (graphics) which is useful and improves quality and separation
*Radford Neal (University of Toronto)
:*PQR, which naturally sees less reason for drastic change
:*There are two sides: R->C and C->R
::*.C/.Fortran is IMO the preferred way: small surface, implementation performance can be improved, can, e.g., run in parallel in PQR
::*.Call/.External/...: not very well defined, PROTECT is hard to get right, how to use NAMED (or whatever will replace NAMED)
::*What about the embedded R interface?
::*The interface between base R and the included packages (stats, …) should also be well-defined
::*The interface contains much more than just header files, e.g., config files, databases, ...
::*Discussion: Alex, Simon, Lukas, …
:::*data.table - very unique dependency on how internals of GNUR work
:::*Validity of .Call functions that modify their arguments without checking NAMED, many uses of this only work given a very specific behavior of the R runtime
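To make the .C vs. .Call contrast above concrete, here is an illustrative fragment (not from the meeting). It is standard R extension code, so it compiles only against R's headers (e.g. via R CMD SHLIB) and is then loaded with dyn.load() from R:

```c
/* Illustrative contrast of the two native interfaces discussed above.
 * Build with R CMD SHLIB; load with dyn.load() from R. */
#include <R.h>
#include <Rinternals.h>

/* .C style: plain C arrays in, results written in place; small API
 * surface, no direct access to R objects. */
void add_one_dotC(double *x, int *n) {
    for (int i = 0; i < *n; i++) x[i] += 1.0;
}

/* .Call style: full SEXP access; the PROTECT bookkeeping is the part
 * that is easy to get wrong, as noted above. */
SEXP add_one_call(SEXP x) {
    R_xlen_t n = XLENGTH(x);
    SEXP out = PROTECT(allocVector(REALSXP, n));
    for (R_xlen_t i = 0; i < n; i++)
        REAL(out)[i] = REAL(x)[i] + 1.0;
    UNPROTECT(1);
    return out;
}
```

From R one would call <code>.C("add_one_dotC", as.double(x), as.integer(length(x)))</code> versus <code>.Call("add_one_call", x)</code>.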
*Mick Jordan (Oracle Labs, FastR)
:*The current R API is an interface to a GNUR-like system, and not to an R runtime
:*A small number of function implementations have gotten FastR a long way; the trickiest parts are in the "callbacks" (the C->R part)
:*Missing documentation is a problem; it makes the API hard to implement
::*(Lukas) Documentation is also a contract that defines what can be expected, which behavior can be depended upon. Otherwise, users will assume that all observed behavior is part of the API.
:*There should be one header file, with documentation
:*Java was in a similar position with the first native API, JNI is the result of that, very well-defined interface that stood the test of time
*Gregory Warnes (Boehringer Ingelheim)
:*RPy, provides a Python interface, generally interested in R native APIs
*Edzer Pebesma (University of Münster)
:*Looking into the rho project, extensions
:*Interested to see how R is used in the greater world outside GNUR
*Michael Sannella (TIBCO, TERR, owner of the R-C-level API)
:*Naturally interested in this
:*data.table as an extreme case in native API usage:
::*Managed to get it to work on TERR (talk about this at RIOT [[http://riotworkshop.github.io]])
::*There are no contracts (in the form of documentation, assertions, ...) in the API, data.table uses this to the extreme
::*It exports some of these extreme uses (e.g., changing attributes) to its users
::*A very interesting/challenging case: it's a very important package; how do we handle this? Make all the "API" it uses available?
:*IMO, the real problem is that the functions are not well-defined enough, not that there are too many: "whatever you can get away with is defined"
:*Few package authors use the API to the max; the average package author has probably not delved so deep into the interface or used all of it, because the lack of documentation is a barrier
*Michael Lawrence (Genentech)
:*Trying to enable package-level (C-level) extensibility of base R packages
:*E.g., new int-vector implementation as a package
:*(Alex) Pushed for this in renjin, e.g., provide new implementation of int vectors
:*Some more discussion:
::*This would clearly not be possible with the current API because of things like “REAL”
::*Gabriel Becker (works with Michael Lawrence) talks at DSC about a modified GNUR
::*(Alex) renjin has DBI-compatible package that simulates a data.frame from a rolling cursor in a DB
::*Should this be “exchange int vector impl in runtime” (for all int vectors) or “create int vector with this implementation” (operator overloading?)
::*(Mick) R is a very complex system that allows modifying many basic assumptions, should there be more or less complexity?
*Indrajit Roy (HP labs)
:*Extending R with distributed data structures
:*Making API compatible with what people write in the future
:*Questions, playing the “bad guy”:
::*Many here want to make changes to the R internals and to the packages so that R can be run by alternative implementations
::*A lot of the points - are they about coding practices? or about making R internal code more modular?
::*Maybe we just need to deprecate all the unused functions? What’s the real goal?
::*(Alex) It’s not so much about being able to implement it, it works already, but to make it easier, more efficient, etc.
::*(Mick) There should be an unbiased party in this, with a view not only from R core
::*(Simon) A lot of the discussions on r-devel are about what is the API and what not
:::*A couple independent views:
::::*Documentation, what is the contract of the API?
::::*People have been using internal API and calling for it to be made external
::::*High-level stuff: replacing high-level pieces
::::*How to make things more flexible
::::*The call is a lot about what people think about the API
==== Additional discussions ====
*Is there someone in this call from the rho project?
:*Karl Millar is listening on the mailing list
:*They, e.g., did GC with stack scanning to avoid the need for PROTECT
:*Discussion about whether the API should include GC aspects
*(Lukas) Why is a large part of the interface duplicated on both the R and C side?
:*The interface could be a lot smaller if eval(...) were used in all cases where there is no performance bottleneck (connection functions, etc.)
:*Functions like “as.vector(…)” and R_asVector: sometimes mismatch between R and C version, sometimes similar
:*(Simon) Historical reasons, stems from the R API being taken from the implementation (which is very powerful, but dangerous)
Additional (in-person) discussions will be scheduled for useR! and RIOT
==== Wrap-up ====
(Lukas): One important question to answer for this WG is: how far do we go?
*Enhance/add documentation
*Trim down the interface (by looking at its current usage, and at what makes sense as an API)
*Extend by replacing tricky parts, with a gradual switchover
*Introduce a consistent API, with breaking changes
*Introduce new APIs for parts that are not covered at the moment (or: include provisions for adding new API in the future)
The big tradeoff is between payoff for GNUR and alternative implementations (more efficient, easier to maintain,…), and increasing effort (and less adoption) on the package side.
R Native API meeting 2016-06-30
2016-07-03T01:41:26Z
Lukasstadler
Informal meeting after the end of the useR! 2016 conference.
Participants: Michael Sannella, Torsten Hothorn, Dirk Eddelbuettel, Karl Millar, Simon Urbanek, Mick Jordan, Lukas Stadler
Discussion topics:
* (Dirk) From the POV of Rcpp, lots of useful functionality is hidden and not part of the official API. It hasn't changed in a long time, so why not make it available?
:* It's not uncommon that people copy out code to make it available.
:* comment on data.table: it has a tiny dependency trail, and keeps working with very old R versions.
* (Torsten) Packages like stats do not export their functionality at the native level (or there are problems with dependency resolution).
:* Another case where people start copying out code.
:* Is it possible to get symbols from a specific package? Yes...
* "eval" could be much more efficient if it had a "prepare" and an "execute" step, like prepared DB statements.
:* Combined with a concise API, this would allow much more R functions to be reused on the native side, without a need for explicit C API.
::* Or have simple C wrappers, which can be replaced with a direct implementation in case of performance problems.
::* Do connection functions, e.g., have to be efficient?
::* Makes for good documentation - "behaves like as.integer" (maybe "sans S3/S4 dispatch")
* Is it "future proofing the API" or "future proofing packages"?
* Discussions related to CRAN:
:* Abandoned but popular packages sometimes get fixed by CRAN maintainers.
:* How could a larger set of changes produced by API renamings be handled?
::* Hard in the current system...
::* Having "master" versions of all packages on github would help.
::* Licensing / openness concerns with github?
:* Testing of GNUR with modified API?
::* Many packages require additional steps, installed libraries, etc.
::* Maybe r-hub could help? (Lukas will contact Gabor Csardi)
::* Two levels where changes can cause packages to fail: installing (compiling) and testing (where examples exist)
* What's the reason for the different prefixes?
:* Rf_..., R..., or no prefix, camel case, upper case, underscores, etc.
:* Historical reasons - cleanup could be done with tools or sed scripts.
* USE_RINTERNALS does two things: additional functionality and better performance
:* the former could be achieved by different include files
:* the latter should not be necessary (why not have everything at top speed, but leave the API in a state that can be verified?)
:* it should be possible to create a wrapper around the API that checks the (documented) contract as tightly as possible
* The manual still explains functionality that is generally considered to be wrong (e.g., "TYPE(x) = LANGSXP;")
* There should be no global variables, only functions (or at least a contract that allows them to be implemented as functions)
:* Not only CRAN - we need to describe the universe of (important?) packages.
:* Dependencies between functions? (sic!)
* General steps this WG should/could take:
:* Tighten API - remove stuff that is not used
::* Remove altogether, or deprecate (or hide behind a #define USE_DEPRECATED_API)
:* Renaming functions?
::* Maybe we want to introduce a new naming scheme?
::* Maybe have a period with both naming schemes
:* Document the functions
::* Describe each function's arguments and its contract.
::* Who could do that? For some functions only core R developers can give a real account of their intended contract.
::* Some functions are tightly related to R functions - maybe describe them in relation to these?
:* Breaking packages is ok, to a certain degree
:* You could do a lot via eval if the details of its behavior were defined well and non-surprising
::* Getting proper error context at the C level?
::* Java solved this with the Java Virtual Machine Tooling API (JVMTI)
:* Maybe create shims of R functions as a new API? docs?
* Immediate next step:
:* (Lukas) Define the "tighten API" task, what it entails, as a (student?) project, and find a "volunteer"
Top Level Projects
2017-11-28T17:27:24Z
Jmertic
== Top-level Projects ==
''Approved by R Consortium TSC on 2017-10-18''
This document outlines a proposed process by which an ISC project might graduate to a top-level project, as well as the process by which such a project may be terminated. A top-level project implies long-term support (with a three-year review) by the R Consortium for the project, regardless of the person running it.
=== Context ===
We currently have three top-level projects:
* R-hub, by Gabor Csardi.
* RUGS program (incl. small conferences)
* R-Ladies
Top-level projects imply long term support, and give the project a seat on the ISC.
=== Project Promotion===
Generally, we will consider the following factors when deciding if a project should become a top-level project:
# The project is important, and is having a significant impact on the R community.
# The project has completed one year of successful funding, and delivered their first annual report.
# Commitment (to some extent) independently of the personnel on initial project.
A project would be nominated by an ISC member and confirmed by a simple majority vote. The ISC chair would then reach out to the project to discuss budget and other details.
=== Budget===
The project would prepare a rough 3 year plan, including discussion of personnel (i.e. either a commitment from the original grantee or a transition plan).
A top-level project would be allotted a line item in the ISC budget to ensure priority funding.
Upon graduation, any remaining funding from the initial grant will return to the ISC.
=== Reporting===
The project would then be expected to have a status report at every regular meeting of the ISC to be presented by their ISC representative.
In lieu of regular project proposals, top-level projects would submit a proposed yearly budget by October 31 (in order to get budgeted for following financial year). This would serve as a regular review point for top-level projects, and would occur in a separate meeting to the other project proposals, with the ISC member associated with the project recusing themselves.
=== Concluding support===
Top-level projects will be reviewed every three years. An explicit positive vote would be required to continue funding.
In exceptional circumstances, a top-level project may be terminated at any time by a simple majority vote of the ISC.
A top-level project will typically not be terminated if the grantee resigns, provided that a succession plan is in place.
File:BengtssonH 20170511-future,RConsortium,flat.pdf
2017-05-12T02:52:32Z
Indrajit roy
=={{int:filedesc}}==
{{Information
|description={{en|1=Futures in R}}
|date=2017-05-11 19:51:03
|source=Henrik Bengtsson
|author=Henrik Bengtsson
|permission=
|other_versions=
}}
=={{int:license-header}}==
{{subst:uwl}}
[[Category:Uploaded with UploadWizard]]
File:Initial Path Visual.PNG
2017-01-21T15:16:20Z
MeharPratapSingh
=={{int:filedesc}}==
{{Information
|description={{en|1=The likely path of certification to be taken in the initial cut}}
|date=2017-01-21 07:14:51
|source={{own}}
|author=[[User:MeharPratapSingh|MeharPratapSingh]]
|permission=
|other_versions=
}}
=={{int:license-header}}==
{{self|cc-by-sa-3.0}}
[[Category:Uploaded with UploadWizard]]
File:MovingParts Visual.PNG
2017-01-21T15:16:20Z
MeharPratapSingh
=={{int:filedesc}}==
{{Information
|description={{en|1=Multiple options currently available for R Certification}}
|date=2017-01-21 07:14:51
|source={{own}}
|author=[[User:MeharPratapSingh|MeharPratapSingh]]
|permission=
|other_versions=
}}
=={{int:license-header}}==
{{self|cc-by-sa-3.0}}
[[Category:Uploaded with UploadWizard]]